The textbook for the Data Science course series is freely available online.
You will get started with data visualization and distributions in R.
You will learn how to use ggplot2 to create plots.
You will learn how to summarize data using dplyr.
You will see examples of ggplot2 and dplyr in action with the Gapminder dataset.
You will learn general principles to guide you in developing effective data visualizations.
Section 1 introduces you to Data Visualization and Distributions.
After completing Section 1, you will:
The textbook for this section is available here
Key points
Code
if(!require(dslabs)) install.packages("dslabs")
## Loading required package: dslabs
library(dslabs)
data(murders)
head(murders)
## state abb region population total
## 1 Alabama AL South 4779736 135
## 2 Alaska AK West 710231 19
## 3 Arizona AZ West 6392017 232
## 4 Arkansas AR South 2915918 93
## 5 California CA West 37253956 1257
## 6 Colorado CO West 5029196 65
The textbook for this section is available here
Key points
We will be working with two types of variables: categorical and numeric. Each can be divided into two groups: categorical variables can be ordinal or non-ordinal, while numerical variables can be discrete or continuous.
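As a quick sketch (toy vectors, not from the course code), each of the four types can be represented directly in R:

```r
# categorical, non-ordinal: a factor with unordered levels
region <- factor(c("South", "West", "West"))

# categorical, ordinal: an ordered factor
size <- factor(c("short", "tall", "medium"),
               levels = c("short", "medium", "tall"),
               ordered = TRUE)

# numeric, discrete: counts can only take whole-number values
totals <- c(135L, 19L, 232L)

# numeric, continuous: measurements can take any value in a range
heights_in <- c(68.2, 75.0, 61.5)

is.ordered(size)    # TRUE
is.ordered(region)  # FALSE
```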
We will review data types using some of the examples provided in the dslabs package. For example, the heights dataset.
library(dslabs)
data(heights)
names(heights)
## [1] "sex" "height"
sex is the first variable. We know what values are represented by this variable and can confirm this by looking at the first few entries:
head(heights)
## sex height
## 1 Male 75
## 2 Male 70
## 3 Male 68
## 4 Male 74
## 5 Male 61
## 6 Female 65
What data type is the sex variable?
Keep in mind that discrete numeric data can be considered ordinal. Although this is technically true, we usually reserve the term ordinal data for variables belonging to a small number of different groups, with each group having many members.
The height variable could be ordinal if, for example, we report a small number of values such as short, medium, and tall. Let’s explore how many unique values are used by the height variable. For this we can use the unique function:
x <- c(3, 3, 3, 3, 4, 4, 2)
unique(x)
x <- heights$height
length(unique(x))
## [1] 139
For categorical data we can construct this distribution by simply computing the frequency of each unique value. This can be done with the function table. Here is an example:
x <- c(3, 3, 3, 3, 4, 4, 2)
table(x)
x <- heights$height
tab <- table(x)
In the previous exercise we computed the variable tab which reports the number of times each unique value appears. For values reported only once tab will be 1. Use logicals and the function sum to count the number of times this happens.
tab <- table(heights$height)
sum(tab==1)
## [1] 63
The textbook for this section is available here
Key points
Use prop.table to convert a table of counts to a frequency table. Barplots display the distribution of categorical variables and are a way to visualize the information in frequency tables.
Code
# load the dataset
library(dslabs)
data(heights)
# make a table of category proportions
prop.table(table(heights$sex))
##
## Female Male
## 0.2266667 0.7733333
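A barplot makes a frequency table like this visual. Here is a minimal base-R sketch that reconstructs the same counts rather than loading dslabs:

```r
# 238 females and 812 males, matching the proportions above
sex <- c(rep("Female", 238), rep("Male", 812))
prop <- prop.table(table(sex))
prop
barplot(prop, ylab = "Proportion")
```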
The textbook for this section is available here
Key points
A further note on histograms: note that the choice of binwidth has a determinative effect on shape. There is no “true” choice for binwidth, and you can sometimes gain insights into the data by experimenting with binwidths.
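A quick way to experiment is to draw the same data with several bin widths. Below is a base-R sketch with simulated heights (geom_histogram, introduced later in the course, has an analogous binwidth argument):

```r
set.seed(1)
x <- rnorm(800, mean = 69, sd = 3)  # simulated male heights in inches
# same data, three bin widths: narrow bins show detail, wide bins smooth it away
hist(x, breaks = seq(50, 90, by = 1),  main = "binwidth = 1")
hist(x, breaks = seq(50, 90, by = 5),  main = "binwidth = 5")
hist(x, breaks = seq(50, 90, by = 10), main = "binwidth = 10")
```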
For example, the quality of a high school is sometimes summarized with one number: the average score on a standardized test. Occasionally, a second number is reported: the standard deviation. So, for example, you might read a report stating that scores were 680 plus or minus 50 (the standard deviation). The report has summarized an entire vector of scores with just two numbers. Is this appropriate? Is there any important piece of information that we are missing by only looking at this summary rather than the entire list? We are going to learn when these two numbers are enough and when we need more elaborate summaries and plots to describe the data.
Our first data visualization building block is learning to summarize lists of factors or numeric vectors. The most basic statistical summary of a list of objects or numbers is its distribution. Once a vector has been summarized as a distribution, there are several data visualization techniques to effectively relay this information. In later assessments we will practice writing code for data visualization. Here we start with some multiple choice questions to test your understanding of distributions and related basic plots.
In the murders dataset, the region is a categorical variable and on the right you can see its distribution. To the closest 5%, what proportion of the states are in the North Central region?
Region vs. Proportion
Which of the following is true:
Based on the plot, what percentage of males are shorter than 75 inches?
eCDF for male heights
Which value m has the property that 1/2 of the male students are taller than m and 1/2 are shorter?
eCDF of the murder rates across states
Knowing that there are 51 states (counting DC) and based on this plot, how many states have murder rates larger than 10 per 100,000 people?
This plot uses the heights dataset. Based on this plot, how many males are between 62.5 and 65.5 inches?
Histogram of male heights
Density plot population
Three density plots
Which of the following statements is true?
The textbook for this section is available here
Key points
The normal distribution:
The standard deviation is roughly the typical distance between a value and the mean; precisely, it is the square root of the average squared distance from the mean.
Calculate the mean using the mean function.
Calculate the standard deviation using the sd function or manually.
Standard units describe how many standard deviations a value is away from the mean. The z-score, or number of standard deviations an observation x is away from the mean (\(\mu\)):
\(Z = \frac{x - \mu}{\sigma}\)
Compute standard units with the scale function.
Important: to calculate the proportion of values that meet a certain condition, use the mean function on a logical vector. Because TRUE is converted to 1 and FALSE is converted to 0, taking the mean of this vector yields the proportion of TRUE.
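For example, with a toy vector (a sketch, not course data):

```r
x <- c(60, 65, 70, 75, 80)
x > 68          # logical vector: FALSE FALSE TRUE TRUE TRUE
mean(x > 68)    # proportion of TRUEs: 3/5 = 0.6
```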
Equation for the normal distribution
The normal distribution is mathematically defined by the following formula for any mean \(\mu\) and standard deviation \(\sigma\):
\(Pr(a < x < b) = \int_{a}^{b} \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2} dx\)
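This integral is what pnorm computes. You can verify numerically that integrating the density dnorm between a and b matches the difference of CDF values (a sketch, not part of the course code):

```r
mu <- 69; sigma <- 3
a <- 66; b <- 72  # one SD on either side of the mean
# left-hand side: numerically integrate the normal density
lhs <- integrate(function(x) dnorm(x, mean = mu, sd = sigma), a, b)$value
# right-hand side: difference of cumulative probabilities
rhs <- pnorm(b, mu, sigma) - pnorm(a, mu, sigma)
c(integral = lhs, pnorm_diff = rhs)  # both about 0.683
```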
Code
if(!require(tidyverse)) install.packages("tidyverse")
## Loading required package: tidyverse
## ── Attaching packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── tidyverse 1.3.0 ──
## ✓ ggplot2 3.3.2 ✓ purrr 0.3.4
## ✓ tibble 3.0.3 ✓ dplyr 1.0.0
## ✓ tidyr 1.1.0 ✓ stringr 1.4.0
## ✓ readr 1.3.1 ✓ forcats 0.5.0
## ── Conflicts ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
# define x as vector of male heights
library(tidyverse)
index <- heights$sex=="Male"
x <- heights$height[index]
# calculate the mean and standard deviation manually
average <- sum(x)/length(x)
SD <- sqrt(sum((x - average)^2)/length(x))
# built-in mean and sd functions
average <- mean(x)
SD <- sd(x)
c(average = average, SD = SD)
## average SD
## 69.314755 3.611024
# calculate standard units
z <- scale(x)
# calculate proportion of values within 2 SD of mean
mean(abs(z) < 2)
## [1] 0.9495074
Note about the sd function: The built-in R function sd calculates the standard deviation, but it divides by length(x)-1 instead of length(x). When the length of the list is large, this difference is negligible and you can use the built-in sd function. Otherwise, you should compute \(\sigma\) by hand. For this course series, assume that you should use the sd function unless told otherwise.
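The difference between the two denominators is easy to check with a small toy vector (a sketch; a small n exaggerates the effect):

```r
x <- c(2, 4, 6, 8)
n <- length(x)
sd_pop    <- sqrt(sum((x - mean(x))^2) / n)   # divide by n
sd_sample <- sd(x)                            # R's sd divides by n - 1
c(by_n = sd_pop, by_n_minus_1 = sd_sample)
# the two are related by a factor of sqrt(n / (n - 1))
```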
But can we summarize even further? We often see the average and standard deviation used as summary statistics: a two number summary! To understand what these summaries are and why they are so widely used, we need to understand the normal distribution.
The normal distribution, also known as the bell curve and as the Gaussian distribution, is one of the most famous mathematical concepts in history. A reason for this is that approximately normal distributions occur in many situations. Examples include gambling winnings, heights, weights, blood pressure, standardized test scores, and experimental measurement errors. Often data visualization is needed to confirm that our data follows a normal distribution.
Here we focus on how the normal distribution helps us summarize data and can be useful in practice.
One way the normal distribution is useful is that it can be used to approximate the distribution of a list of numbers without having access to the entire list. We will demonstrate this with the heights dataset.
Load the height data set and create a vector x with just the male heights:
library(dslabs)
data(heights)
x <- heights$height[heights$sex == "Male"]
What proportion of the data is between 69 and 72 inches (taller than 69 but shorter or equal to 72)? A proportion is between 0 and 1.
x <- heights$height[heights$sex == "Male"]
mean(x > 69 & x <= 72)
## [1] 0.3337438
We can compute the average and standard deviation like this:
library(dslabs)
data(heights)
x <- heights$height[heights$sex=="Male"]
avg <- mean(x)
stdev <- sd(x)
Suppose you only have avg and stdev below, but no access to x. Can you approximate the proportion of the data that is between 69 and 72 inches?
Given a normal distribution with a mean mu and standard deviation sigma, you can calculate the proportion of observations less than or equal to a certain value with pnorm(value, mu, sigma). Notice that this is the CDF for the normal distribution. We will learn much more about pnorm later in the course series, but you can also learn more now with ?pnorm.
x <- heights$height[heights$sex=="Male"]
avg <- mean(x)
stdev <- sd(x)
pnorm(72, avg, stdev) - pnorm(69, avg, stdev)
## [1] 0.3061779
The normal distribution was a useful approximation for this case. However, the approximation is not always useful; it can fail for the more extreme values, often called the “tails” of the distribution. Let’s look at an example. We can compute the proportion of heights between 79 and 81 inches.
library(dslabs)
data(heights)
x <- heights$height[heights$sex == "Male"]
mean(x > 79 & x <= 81)
x <- heights$height[heights$sex == "Male"]
avg <- mean(x)
stdev <- sd(x)
exact <- mean(x > 79 & x <= 81)
approx <- pnorm(81, avg, stdev) - pnorm(79, avg, stdev)
exact
## [1] 0.004926108
approx
## [1] 0.003051617
exact/approx
## [1] 1.614261
First, we will estimate the proportion of adult men that are 7 feet tall or taller.
Assume that the distribution of adult men in the world is approximately normal with an average of 69 inches and a standard deviation of 3 inches.
# use pnorm to calculate the proportion over 7 feet (7*12 inches)
1 - pnorm(7*12, 69, 3)
## [1] 2.866516e-07
In the previous exercise we computed the proportion, p, of men that are 7 feet tall or taller. We know that there are about 1 billion men between the ages of 18 and 40 in the world, the age range for the NBA.
Can we use the normal distribution to estimate how many of these 1 billion men are at least seven feet tall?
p <- 1 - pnorm(7*12, 69, 3)
round(p*10^9)
## [1] 287
p <- 1 - pnorm(7*12, 69, 3)
N <- round(p*10^9)
10/N
## [1] 0.03484321
Repeat the calculations performed in the previous question for Lebron James’ height: 6 feet 8 inches. There are about 150 players, instead of 10, that are at least that tall in the NBA.
## Modify the solution from the previous answer
p <- 1 - pnorm(7*12, 69, 3)
N <- round(p * 10^9)
10/N
## [1] 0.03484321
p <- 1 - pnorm(6*12+8, 69, 3)
N <- round(p * 10^9)
150/N
## [1] 0.001220842
What would be a fair critique of our calculations?
The textbook for this section is available here
Key points
Quantiles are computed with the quantile function. Theoretical normal quantiles are computed with the qnorm function. qnorm will calculate quantiles for the standard normal distribution (\(\mu = 0\), \(\sigma = 1\)) by default, but it can calculate quantiles for any normal distribution given mean and sd arguments. We will learn more about qnorm in the probability course.
Code
# define x and z
index <- heights$sex=="Male"
x <- heights$height[index]
z <- scale(x)
# proportion of data below 69.5
mean(x <= 69.5)
## [1] 0.5147783
# calculate observed and theoretical quantiles
p <- seq(0.05, 0.95, 0.05)
observed_quantiles <- quantile(x, p)
theoretical_quantiles <- qnorm(p, mean = mean(x), sd = sd(x))
# make QQ-plot
plot(theoretical_quantiles, observed_quantiles)
abline(0,1)
# make QQ-plot with scaled values
observed_quantiles <- quantile(z, p)
theoretical_quantiles <- qnorm(p)
plot(theoretical_quantiles, observed_quantiles)
abline(0,1)
The textbook for this section is available here
Key points
Percentiles are the quantiles obtained when defining p as 0.01, 0.02, ..., 0.99. They summarize the values at which a certain percent of the observations are equal to or less than that value.
The textbook for this section is available here
Key points
male <- heights$height[heights$sex=="Male"]
female <- heights$height[heights$sex=="Female"]
length(male)
## [1] 812
length(female)
## [1] 238
We can compute percentiles using the quantile function like this:
library(dslabs)
data(heights)
quantile(heights$height, seq(.01, 0.99, 0.01))
male <- heights$height[heights$sex=="Male"]
female <- heights$height[heights$sex=="Female"]
female_percentiles <- quantile(female, seq(0.1, 0.9, 0.2))
male_percentiles <- quantile(male, seq(0.1, 0.9, 0.2))
df <- data.frame(female = female_percentiles, male = male_percentiles)
df
## female male
## 10% 61.00000 65.00000
## 30% 63.00000 68.00000
## 50% 64.98031 69.00000
## 70% 66.46417 71.00000
## 90% 69.00000 73.22751
Continent vs Population
Which continent has the country with the largest population size?
Which continent has the largest median country population?
To the nearest million, what is the median population size for Africa?
The textbook for this section is available here
Key points
if(!require(HistData)) install.packages("HistData")
## Loading required package: HistData
## Warning: package 'HistData' was built under R version 4.0.2
library(HistData)
data(Galton)
x <- Galton$child
mean(x)
## [1] 68.08847
median(x)
## [1] 68.2
x <- Galton$child
sd(x)
## [1] 2.517941
mad(x)
## [1] 2.9652
Now suppose Galton made a mistake when entering the first value, forgetting to use the decimal point. You can imitate this error by typing:
library(HistData)
data(Galton)
x <- Galton$child
x_with_error <- x
x_with_error[1] <- x_with_error[1]*10
The data now has an outlier that the normal approximation does not account for. Let’s see how this affects the average.
x <- Galton$child
x_with_error <- x
x_with_error[1] <- x_with_error[1]*10
gem <- mean(x)
gem_error <- mean(x_with_error)
gem_error - gem
## [1] 0.5983836
Now let’s explore the effect this outlier has on the standard deviation.
x_with_error <- x
x_with_error[1] <- x_with_error[1]*10
sd(x_with_error)- sd(x)
## [1] 15.6746
Now we are going to see how the median and MAD are much more resistant to outliers. For this reason we say that they are robust summaries.
x_with_error <- x
x_with_error[1] <- x_with_error[1]*10
mediaan <- median(x)
mediaan_error <- median(x_with_error)
mediaan_error - mediaan
## [1] 0
We saw that the median barely changes. Now let’s see how the MAD is affected.
x_with_error <- x
x_with_error[1] <- x_with_error[1]*10
mad_normal <- mad(x)
mad_error <- mad(x_with_error)
mad_error - mad_normal
## [1] 0
But how large can this effect get? This of course depends on the size of the outlier and the size of the dataset.
To see how outliers can affect the average of a dataset, let’s write a simple function that takes the size of the outlier as input and returns the average.
x <- Galton$child
error_avg <- function(k){
x[1] = k
mean(x)
}
error_avg(10000)
## [1] 78.79784
error_avg(-10000)
## [1] 57.24612
In Section 2, you will learn how to create data visualizations in R using ggplot2.
After completing Section 2, you will:
Note that it can be hard to memorize all of the functions and arguments used by ggplot2, so we recommend that you have a cheat sheet handy to help you remember the necessary commands.
The textbook for this section is available here
Key points
You can load the ggplot2 package as part of the tidyverse with library(tidyverse), or load just ggplot2 with library(ggplot2) instead of loading the entire tidyverse.
The textbook for this section is available here
Key points
The textbook for this section is available here
Key points
You can initialize a ggplot object with the data as an argument, ggplot(data = x) or ggplot(x), or by piping the data in with x %>% ggplot().
Code
ggplot(data = murders)
murders %>% ggplot()
p <- ggplot(data = murders)
class(p)
## [1] "gg" "ggplot"
print(p) # this is equivalent to simply typing p
The functions above render a plot, in this case a blank slate since no geometry has been defined. The only style choice we see is a grey background.
The textbook for this section is available here
Key points
In ggplot2, plots are built by adding layers: DATA %>% ggplot() + LAYER_1 + LAYER_2 + ... + LAYER_N. Geometry layers are added with functions of the form geom_X, where X is the plot type. Aesthetic mappings are defined with the aes function, which uses variable names from the data frame directly (total rather than murders$total). geom_point creates a scatterplot and requires x and y aesthetic mappings. geom_text and geom_label add text to a scatterplot and require x, y, and label aesthetic mappings.
Code: Adding layers to a plot
murders %>% ggplot() +
geom_point(aes(x = population/10^6, y = total))
# add points layer to predefined ggplot object
p <- ggplot(data = murders)
p + geom_point(aes(population/10^6, total))
# add text layer to scatterplot
p + geom_point(aes(population/10^6, total)) +
geom_text(aes(population/10^6, total, label = abb))
Code: Example of aes behavior
# no error from this call
p_test <- p + geom_text(aes(population/10^6, total, label = abb))
# error - "abb" is not a globally defined variable and cannot be found outside of aes
p_test <- p + geom_text(aes(population/10^6, total), label = abb)
The textbook for this section is available here and here
Key points
Global aesthetic mappings apply to the whole plot and are defined in ggplot. All the geometries added as layers will default to this mapping. Local aesthetic mappings add additional information or override the default mappings.
Code
# change the size of the points
p + geom_point(aes(population/10^6, total), size = 3) +
geom_text(aes(population/10^6, total, label = abb))
# move text labels slightly to the right
p + geom_point(aes(population/10^6, total), size = 3) +
geom_text(aes(population/10^6, total, label = abb), nudge_x = 1)
# simplify code by adding global aesthetic
p <- murders %>% ggplot(aes(population/10^6, total, label = abb))
p + geom_point(size = 3) +
geom_text(nudge_x = 1.5)
# local aesthetics override global aesthetics
p + geom_point(size = 3) +
geom_text(aes(x = 10, y = 800, label = "Hello there!"))
The textbook for this section is available here
Key points
Convert the x-axis to a log scale with scale_x_continuous(trans = "log10") or scale_x_log10. Similar functions exist for the y-axis. Add axis labels with the xlab and ylab functions. Add a plot title with the ggtitle function. To color points by a variable, define the col argument within aes. To color all points the same way, define col outside of aes. Add a line with the geom_abline geometry, which takes arguments slope (default = 1) and intercept (default = 0). Change the line color with col or color and the line type with lty. Change the legend title with scale_color_discrete.
Code: Log-scale the x- and y-axis
# define p
p <- murders %>% ggplot(aes(population/10^6, total, label = abb))
# log base 10 scale the x-axis and y-axis
p + geom_point(size = 3) +
geom_text(nudge_x = 0.05) +
scale_x_continuous(trans = "log10") +
scale_y_continuous(trans = "log10")
# efficient log scaling of the axes
p + geom_point(size = 3) +
geom_text(nudge_x = 0.075) +
scale_x_log10() +
scale_y_log10()
Code: Add labels and title
p + geom_point(size = 3) +
geom_text(nudge_x = 0.075) +
scale_x_log10() +
scale_y_log10() +
xlab("Population in millions (log scale)") +
ylab("Total number of murders (log scale)") +
ggtitle("US Gun Murders in 2010")
Code: Change color of the points
# redefine p to be everything except the points layer
p <- murders %>%
ggplot(aes(population/10^6, total, label = abb)) +
geom_text(nudge_x = 0.075) +
scale_x_log10() +
scale_y_log10() +
xlab("Population in millions (log scale)") +
ylab("Total number of murders (log scale)") +
ggtitle("US Gun Murders in 2010")
# make all points blue
p + geom_point(size = 3, color = "blue")
# color points by region
p + geom_point(aes(col = region), size = 3)
Code: Add a line with average murder rate
# define average murder rate
r <- murders %>%
summarize(rate = sum(total) / sum(population) * 10^6) %>%
pull(rate)
# basic line with average murder rate for the country
p + geom_point(aes(col = region), size = 3) +
geom_abline(intercept = log10(r)) # slope is default of 1
# change line to dashed and dark grey, line under points
p +
geom_abline(intercept = log10(r), lty = 2, color = "darkgrey") +
geom_point(aes(col = region), size = 3)
Code: Change legend title
p <- p + scale_color_discrete(name = "Region") # capitalize legend title
The textbook for this section is available here and here
Key points
theme function.geom_text_repel.Code: Adding themes
if(!require(ggthemes)) install.packages("ggthemes")
## Loading required package: ggthemes
## Warning: package 'ggthemes' was built under R version 4.0.2
# theme used for graphs in the textbook and course
ds_theme_set()
# themes from ggthemes
library(ggthemes)
p + theme_economist() # style of the Economist magazine
p + theme_fivethirtyeight() # style of the FiveThirtyEight website
Code: Putting it all together to assemble the plot
if(!require(ggrepel)) install.packages("ggrepel")
## Loading required package: ggrepel
## Warning: package 'ggrepel' was built under R version 4.0.2
# load libraries
library(ggrepel)
# define the intercept
r <- murders %>%
summarize(rate = sum(total) / sum(population) * 10^6) %>%
.$rate
# make the plot, combining all elements
murders %>%
ggplot(aes(population/10^6, total, label = abb)) +
geom_abline(intercept = log10(r), lty = 2, color = "darkgrey") +
geom_point(aes(col = region), size = 3) +
geom_text_repel() +
scale_x_log10() +
scale_y_log10() +
xlab("Population in millions (log scale)") +
ylab("Total number of murders (log scale)") +
ggtitle("US Gun Murders in 2010") +
scale_color_discrete(name = "Region") +
theme_economist()
The textbook for this section is available here
Key points
geom_histogram creates a histogram. Use the binwidth argument to change the width of bins, the fill argument to change the bar fill color, and the col argument to change the bar outline color. geom_density creates smooth density plots. Change the fill color of the plot with the fill argument. geom_qq creates a quantile-quantile plot. This geometry requires the sample argument. By default, the data are compared to a standard normal distribution with a mean of 0 and standard deviation of 1. This can be changed with the dparams argument, or the sample data can be scaled. Plots can be arranged adjacent to each other using the grid.arrange function from the gridExtra package. First, create the plots and save them to objects (p1, p2, ...). Then pass the plot objects to grid.arrange.
Code: Histograms in ggplot2
# define p
p <- heights %>%
filter(sex == "Male") %>%
ggplot(aes(x = height))
# basic histograms
p + geom_histogram()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
p + geom_histogram(binwidth = 1)
# histogram with blue fill, black outline, labels and title
p + geom_histogram(binwidth = 1, fill = "blue", col = "black") +
xlab("Male heights in inches") +
ggtitle("Histogram")
Code: Smooth density plots in ggplot2
p + geom_density()
p + geom_density(fill = "blue")
Code: Quantile-quantile plots in ggplot2
# basic QQ-plot
p <- heights %>% filter(sex == "Male") %>%
ggplot(aes(sample = height))
p + geom_qq()
# QQ-plot against a normal distribution with same mean/sd as data
params <- heights %>%
filter(sex == "Male") %>%
summarize(mean = mean(height), sd = sd(height))
p + geom_qq(dparams = params) +
geom_abline()
# QQ-plot of scaled data against the standard normal distribution
heights %>%
ggplot(aes(sample = scale(height))) +
geom_qq() +
geom_abline()
Code: Grids of plots with the gridExtra package
if(!require(gridExtra)) install.packages("gridExtra")
## Loading required package: gridExtra
##
## Attaching package: 'gridExtra'
## The following object is masked from 'package:dplyr':
##
## combine
# define plots p1, p2, p3
p <- heights %>% filter(sex == "Male") %>% ggplot(aes(x = height))
p1 <- p + geom_histogram(binwidth = 1, fill = "blue", col = "black")
p2 <- p + geom_histogram(binwidth = 2, fill = "blue", col = "black")
p3 <- p + geom_histogram(binwidth = 3, fill = "blue", col = "black")
# arrange plots next to each other in 1 row, 3 columns
library(gridExtra)
grid.arrange(p1, p2, p3, ncol = 3)
Start by loading the dplyr and ggplot2 libraries as well as the murders data.
library(dplyr)
library(ggplot2)
library(dslabs)
data(murders)
Note that you can load both dplyr and ggplot2, as well as other packages, by installing and loading the tidyverse package.
With ggplot2, plots can be saved as objects. For example, we can associate a dataset with a plot object like this:
p <- ggplot(data = murders)
Because data is the first argument we don’t need to spell it out. So we can write this instead:
p <- ggplot(murders)
or, if we load dplyr, we can use the pipe:
p <- murders %>% ggplot()
Remember that the pipe sends the object on the left of %>% to be the first argument of the function on the right of %>%.
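A minimal illustration with a plain function (a sketch, not course data):

```r
library(dplyr)               # provides the %>% pipe
sqrt(16)                     # 4
16 %>% sqrt()                # equivalent: 16 becomes the first argument
round(3.14159, digits = 2)   # 3.14
3.14159 %>% round(digits = 2)  # equivalent
```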
Now let’s get an introduction to ggplot.
if(!require(dplyr)) install.packages("dplyr")
library(dplyr)
p <- ggplot(murders)
class(p)
## [1] "gg" "ggplot"
Remember that you can print an object either with the print function or by simply typing the object name. For example, instead of
x <- 2
print(x)
you can simply type
x <- 2
x
Print the object p defined in exercise one
p <- ggplot(murders)
and describe what you see.
Now create a ggplot object from the heights dataset using ggplot.
# define ggplot object called p like in the previous exercise but using a pipe
p <- heights %>% ggplot()
p # a blank slate plot
Explore the murders data frame to remind yourself of the names for the two variables (total murders and population size) we want to plot and select the correct answer.
To create a scatterplot we add a layer with geom_point. The aesthetic mappings require us to define the x-axis and y-axis variables, respectively. So the code looks like this:
murders %>% ggplot(aes(x = , y = )) +
geom_point()
except we have to fill in the blanks to define the two variables x and y.
## Fill in the blanks
murders %>% ggplot(aes(x = population, y = total)) +
geom_point()
murders %>% ggplot(aes(population, total)) +
geom_point()
However, note that the following code
murders %>% ggplot(aes(population, total)) +
geom_label()
will give us the error message: Error: geom_label requires the following missing aesthetics: label
Why is this?
## edit the next line to add the label
murders %>% ggplot(aes(population, total, label = abb)) + geom_label()
murders %>% ggplot(aes(population, total, label = abb)) +
geom_label()
Now we will edit this code.
murders %>% ggplot(aes(population, total, label = abb)) +
geom_label(color = "blue")
So the states from the West will be one color, states from the Northeast another, and so on.
In this case, which of the following is most appropriate:
murders %>% ggplot(aes(population, total, label = abb)) +
geom_label()
We are now going to add color to represent the region.
## edit this code
murders %>% ggplot(aes(population, total, label = abb, color=region)) +
geom_label()
Let’s start by defining an object p that holds the plot we have made up to now:
p <- murders %>% ggplot(aes(population, total, label = abb, color = region)) +
geom_label()
To change the x-axis to a log scale we learned about the scale_x_log10() function. We can add this layer to the object p to change the scale and render the plot using the following code:
p + scale_x_log10()
p <- murders %>% ggplot(aes(population, total, label = abb, color = region)) + geom_label()
## add layers to p here
p + scale_x_log10() + scale_y_log10()
library(dplyr)
library(ggplot2)
library(dslabs)
data(murders)
p<- murders %>% ggplot(aes(population, total, label = abb, color = region)) +
geom_label()
p + scale_x_log10() + scale_y_log10()
We are now going to add a title to this plot. We will do this by adding yet another layer, this time with the function ggtitle.
p <- murders %>% ggplot(aes(population, total, label = abb, color = region)) + geom_label()
# add a layer to add title to the next line
p + scale_x_log10() + scale_y_log10() + ggtitle("Gun murder data")
Next we switch from the murders dataset to explore the heights dataset. We use the geom_histogram function to make a histogram of the heights in the heights data frame. When reading the documentation for this function we see that it requires just one mapping, the values to be used for the histogram.
What is the variable containing the heights in inches in the heights data frame?
The following code has been pre-run for you to load the heights dataset:
library(dplyr)
library(ggplot2)
library(dslabs)
data(heights)
# define p here
p <- heights %>% ggplot(aes(height))
p <- heights %>%
ggplot(aes(height))
## add a layer to p
p + geom_histogram()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Note the warning message: stat_bin() using bins = 30. Pick better value with binwidth.
p <- heights %>%
ggplot(aes(height))
## add the geom_histogram layer but with the requested argument
p + geom_histogram(binwidth = 1)
In this case, we will not make an object p. Instead we will render the plot using a single line of code. In the previous exercise, we could have created a histogram using one line of code like this:
heights %>%
ggplot(aes(height)) +
geom_histogram()
## add the correct layer using +
heights %>%
ggplot(aes(height)) + geom_density()
We can do this using the group argument within the aes mapping. Because each point will be assigned to a different density depending on a variable from the dataset, we need to map within aes.
## add the group argument then a layer with +
heights %>%
ggplot(aes(height, group = sex)) + geom_density()
heights %>%
ggplot(aes(height, group = sex)) +
geom_density()
We can also assign groups through the color or fill argument. For example, if you type color = sex, ggplot knows you want a different color for each sex, so two densities must be drawn and you can skip the group = sex mapping. Using color has the added benefit that the groups are visually distinguished. Change the density plots from the previous exercise to add color.
## edit the next line to use color instead of group then add a density layer
heights %>%
ggplot(aes(height, color = sex)) + geom_density()
We can also use the fill argument. When using the geom_density geometry, color creates a colored line for the smooth density plot while fill colors in the area under the curve.
We can see what this looks like by running the following code:
heights %>%
ggplot(aes(height, fill = sex)) +
geom_density()
However, here the second density is drawn over the other. We can change this by using something called alpha blending.
heights %>%
ggplot(aes(height, fill = sex)) +
geom_density(alpha=0.2)
Section 3 introduces you to summarizing with dplyr.
After completing Section 3, you will:
The textbook for this section is available here
Key points
summarize from the dplyr/tidyverse package computes summary statistics from the data frame. It returns a data frame whose column names are defined within the function call. summarize can compute any summary function that operates on vectors and returns a single value, but it cannot operate on functions that return multiple values. summarize is aware of variable names within data frames and can use them directly.
Code
# compute average and standard deviation for males
s <- heights %>%
filter(sex == "Male") %>%
summarize(average = mean(height), standard_deviation = sd(height))
# access average and standard deviation from summary table
s$average
## [1] 69.31475
s$standard_deviation
## [1] 3.611024
# compute median, min and max
heights %>%
filter(sex == "Male") %>%
summarize(median = median(height),
minimum = min(height),
maximum = max(height))
## median minimum maximum
## 1 69 50 82.67717
# alternative way to get min, median, max in base R
quantile(heights$height, c(0, 0.5, 1))
## 0% 50% 100%
## 50.00000 68.50000 82.67717
# generates an error: summarize can only take functions that return a single value
heights %>%
filter(sex == "Male") %>%
summarize(range = quantile(height, c(0, 0.5, 1)))
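One workaround (a sketch, not part of the course code): since summarize needs functions that return a single value, call quantile once per desired quantile so each call returns exactly one number.

```r
library(dplyr)
library(dslabs)
data(heights)

# call quantile() once per summary column so each call returns a single value
heights %>%
  filter(sex == "Male") %>%
  summarize(minimum = quantile(height, 0),
            median  = quantile(height, 0.5),
            maximum = quantile(height, 1))
```

This reproduces the median/min/max table from above without triggering the multiple-values error.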
The textbook for this section is available here
Note that a common replacement for the dot operator is the pull function. Here is the textbook section on the pull function.
Key points
- The dot operator allows you to access values stored in data that is being piped in using the %>% character. The dot is a placeholder for the data being passed in through the pipe.
- The dot operator allows dplyr functions to return single vectors or numbers instead of only data frames.
- us_murder_rate %>% .$rate is equivalent to us_murder_rate$rate.
- An equivalent way to extract a single column with the pipe is us_murder_rate %>% pull(rate). The pull function will be used in later course material.

Code
murders <- murders %>% mutate(murder_rate = total/population*100000)
summarize(murders, mean(murder_rate))
## mean(murder_rate)
## 1 2.779125
# calculate US murder rate, generating a data frame
us_murder_rate <- murders %>%
summarize(rate = sum(total) / sum(population) * 100000)
us_murder_rate
## rate
## 1 3.034555
# extract the numeric US murder rate with the dot operator
us_murder_rate %>% .$rate
## [1] 3.034555
# calculate and extract the murder rate with one pipe
us_murder_rate <- murders %>%
summarize(rate = sum(total) / sum(population) * 100000) %>%
.$rate
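The pull function mentioned above can replace the dot operator; a minimal sketch of the same extraction:

```r
library(dplyr)
library(dslabs)
data(murders)

# calculate and extract the murder rate with one pipe, using pull() instead of .$
us_murder_rate <- murders %>%
  summarize(rate = sum(total) / sum(population) * 100000) %>%
  pull(rate)
us_murder_rate
```

Both forms return a plain numeric value rather than a data frame.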
The textbook for this section is available here
Key points
- The group_by function from dplyr converts a data frame to a grouped data frame, creating groups using one or more variables.
- summarize and some other dplyr functions will behave differently on grouped data frames.
- Using summarize on a grouped data frame computes the summary statistics for each of the separate groups.

Code
# compute separate average and standard deviation for male/female heights
heights %>%
group_by(sex) %>%
summarize(average = mean(height), standard_deviation = sd(height))
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 2 x 3
## sex average standard_deviation
## <fct> <dbl> <dbl>
## 1 Female 64.9 3.76
## 2 Male 69.3 3.61
# compute median murder rate in 4 regions of country
murders <- murders %>%
mutate(murder_rate = total/population * 100000)
murders %>%
group_by(region) %>%
summarize(median_rate = median(murder_rate))
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 4 x 2
## region median_rate
## <fct> <dbl>
## 1 Northeast 1.80
## 2 South 3.40
## 3 North Central 1.97
## 4 West 1.29
The textbook for this section is available here
Key points
- The arrange function from dplyr sorts a data frame by a given column.
- By default, arrange sorts in ascending order (lowest to highest). To instead sort in descending order, use the function desc inside of arrange.
- You can arrange by multiple levels: within equivalent values of the first level, observations are sorted by the second level, and so on.
- The top_n function shows the top results ranked by a given variable, but the results are not ordered. You can combine top_n with arrange to return the top results in order.

Code
# set up murders object
murders <- murders %>%
mutate(murder_rate = total/population * 100000)
# arrange by population column, smallest to largest
murders %>% arrange(population) %>% head()
## state abb region population total murder_rate
## 1 Wyoming WY West 563626 5 0.8871131
## 2 District of Columbia DC South 601723 99 16.4527532
## 3 Vermont VT Northeast 625741 2 0.3196211
## 4 North Dakota ND North Central 672591 4 0.5947151
## 5 Alaska AK West 710231 19 2.6751860
## 6 South Dakota SD North Central 814180 8 0.9825837
# arrange by murder rate, smallest to largest
murders %>% arrange(murder_rate) %>% head()
## state abb region population total murder_rate
## 1 Vermont VT Northeast 625741 2 0.3196211
## 2 New Hampshire NH Northeast 1316470 5 0.3798036
## 3 Hawaii HI West 1360301 7 0.5145920
## 4 North Dakota ND North Central 672591 4 0.5947151
## 5 Iowa IA North Central 3046355 21 0.6893484
## 6 Idaho ID West 1567582 12 0.7655102
# arrange by murder rate in descending order
murders %>% arrange(desc(murder_rate)) %>% head()
## state abb region population total murder_rate
## 1 District of Columbia DC South 601723 99 16.452753
## 2 Louisiana LA South 4533372 351 7.742581
## 3 Missouri MO North Central 5988927 321 5.359892
## 4 Maryland MD South 5773552 293 5.074866
## 5 South Carolina SC South 4625364 207 4.475323
## 6 Delaware DE South 897934 38 4.231937
# arrange by region alphabetically, then by murder rate within each region
murders %>% arrange(region, murder_rate) %>% head()
## state abb region population total murder_rate
## 1 Vermont VT Northeast 625741 2 0.3196211
## 2 New Hampshire NH Northeast 1316470 5 0.3798036
## 3 Maine ME Northeast 1328361 11 0.8280881
## 4 Rhode Island RI Northeast 1052567 16 1.5200933
## 5 Massachusetts MA Northeast 6547629 118 1.8021791
## 6 New York NY Northeast 19378102 517 2.6679599
# show the top 10 states with highest murder rate, not ordered by rate
murders %>% top_n(10, murder_rate)
## state abb region population total murder_rate
## 1 Arizona AZ West 6392017 232 3.629527
## 2 Delaware DE South 897934 38 4.231937
## 3 District of Columbia DC South 601723 99 16.452753
## 4 Georgia GA South 9920000 376 3.790323
## 5 Louisiana LA South 4533372 351 7.742581
## 6 Maryland MD South 5773552 293 5.074866
## 7 Michigan MI North Central 9883640 413 4.178622
## 8 Mississippi MS South 2967297 120 4.044085
## 9 Missouri MO North Central 5988927 321 5.359892
## 10 South Carolina SC South 4625364 207 4.475323
# show the top 10 states with highest murder rate, ordered by rate
murders %>% arrange(desc(murder_rate)) %>% top_n(10)
## Selecting by murder_rate
## state abb region population total murder_rate
## 1 District of Columbia DC South 601723 99 16.452753
## 2 Louisiana LA South 4533372 351 7.742581
## 3 Missouri MO North Central 5988927 321 5.359892
## 4 Maryland MD South 5773552 293 5.074866
## 5 South Carolina SC South 4625364 207 4.475323
## 6 Delaware DE South 897934 38 4.231937
## 7 Michigan MI North Central 9883640 413 4.178622
## 8 Mississippi MS South 2967297 120 4.044085
## 9 Georgia GA South 9920000 376 3.790323
## 10 Arizona AZ West 6392017 232 3.629527
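In newer versions of dplyr (1.0 and later), top_n is superseded by slice_max, which returns the top rows already sorted; a hedged alternative to the arrange/top_n combination above:

```r
library(dplyr)
library(dslabs)
data(murders)

murders <- murders %>% mutate(murder_rate = total/population * 100000)

# slice_max() returns the 10 highest murder rates, already in descending order
murders %>% slice_max(murder_rate, n = 10)
```

slice_min works the same way for the lowest values.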
To practice our dplyr skills we will be working with data from a survey collected by the United States National Center for Health Statistics (NCHS). This center has conducted a series of health and nutrition surveys since the 1960s.
Starting in 1999, about 5,000 individuals of all ages have been interviewed every year, and they then complete the health examination component of the survey. Part of this dataset is made available via the NHANES package, which can be loaded this way:
if(!require(NHANES)) install.packages("NHANES")
## Loading required package: NHANES
## Warning: package 'NHANES' was built under R version 4.0.2
library(NHANES)
data(NHANES)
The NHANES data has many missing values. Remember that the main summarization function in R will return NA if any of the entries of the input vector is an NA. Here is an example:
data(na_example)
mean(na_example)
## [1] NA
sd(na_example)
## [1] NA
To ignore the NAs, we can use the na.rm argument:
mean(na_example, na.rm = TRUE)
## [1] 2.301754
sd(na_example, na.rm = TRUE)
## [1] 1.22338
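To see how many values na.rm is ignoring, you can combine is.na with sum (a small sketch, not part of the course code):

```r
library(dslabs)
data(na_example)

# is.na() returns a logical vector; sum() counts the TRUEs, i.e. the NAs
sum(is.na(na_example))
```

This is a useful sanity check before deciding whether dropping NAs is reasonable.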
Try running this code, then let us know you are ready to proceed with the analysis.
We will be exploring blood pressure in the NHANES data. First let's select a group to set the standard. We will use 20-29 year old females. Note that the category is coded as " 20-29", with a space in front of the 20! AgeDecade is a categorical variable containing these age groups.
To know if someone is female, you can look at the Gender variable.
## fill in what is needed
tab <- NHANES %>% filter(AgeDecade == " 20-29" & Gender == "female")
head(tab)
## # A tibble: 6 x 76
## ID SurveyYr Gender Age AgeDecade AgeMonths Race1 Race3 Education
## <int> <fct> <fct> <int> <fct> <int> <fct> <fct> <fct>
## 1 51710 2009_10 female 26 " 20-29" 319 White <NA> College …
## 2 51731 2009_10 female 28 " 20-29" 346 Black <NA> High Sch…
## 3 51741 2009_10 female 21 " 20-29" 253 Black <NA> Some Col…
## 4 51741 2009_10 female 21 " 20-29" 253 Black <NA> Some Col…
## 5 51760 2009_10 female 27 " 20-29" 334 Hisp… <NA> 9 - 11th…
## 6 51764 2009_10 female 29 " 20-29" 357 White <NA> College …
## # … with 67 more variables: MaritalStatus <fct>, HHIncome <fct>,
## # HHIncomeMid <int>, Poverty <dbl>, HomeRooms <int>, HomeOwn <fct>,
## # Work <fct>, Weight <dbl>, Length <dbl>, HeadCirc <dbl>, Height <dbl>,
## # BMI <dbl>, BMICatUnder20yrs <fct>, BMI_WHO <fct>, Pulse <int>,
## # BPSysAve <int>, BPDiaAve <int>, BPSys1 <int>, BPDia1 <int>, BPSys2 <int>,
## # BPDia2 <int>, BPSys3 <int>, BPDia3 <int>, Testosterone <dbl>,
## # DirectChol <dbl>, TotChol <dbl>, UrineVol1 <int>, UrineFlow1 <dbl>,
## # UrineVol2 <int>, UrineFlow2 <dbl>, Diabetes <fct>, DiabetesAge <int>,
## # HealthGen <fct>, DaysPhysHlthBad <int>, DaysMentHlthBad <int>,
## # LittleInterest <fct>, Depressed <fct>, nPregnancies <int>, nBabies <int>,
## # Age1stBaby <int>, SleepHrsNight <int>, SleepTrouble <fct>,
## # PhysActive <fct>, PhysActiveDays <int>, TVHrsDay <fct>, CompHrsDay <fct>,
## # TVHrsDayChild <int>, CompHrsDayChild <int>, Alcohol12PlusYr <fct>,
## # AlcoholDay <int>, AlcoholYear <int>, SmokeNow <fct>, Smoke100 <fct>,
## # Smoke100n <fct>, SmokeAge <int>, Marijuana <fct>, AgeFirstMarij <int>,
## # RegularMarij <fct>, AgeRegMarij <int>, HardDrugs <fct>, SexEver <fct>,
## # SexAge <int>, SexNumPartnLife <int>, SexNumPartYear <int>, SameSex <fct>,
## # SexOrientation <fct>, PregnantNow <fct>
You will determine the average and standard deviation of systolic blood pressure, which are stored in the BPSysAve variable in the NHANES dataset.
## complete this line of code.
ref <- NHANES %>%
filter(AgeDecade == " 20-29" & Gender == "female") %>%
summarize(average = mean(BPSysAve, na.rm = TRUE),
standard_deviation = sd(BPSysAve, na.rm = TRUE))
ref
## # A tibble: 1 x 2
## average standard_deviation
## <dbl> <dbl>
## 1 108. 10.1
For this exercise, you should review how to use the placeholder . in dplyr or the pull function.
## modify the code we wrote for previous exercise.
ref_avg <- NHANES %>%
filter(AgeDecade == " 20-29" & Gender == "female") %>%
summarize(average = mean(BPSysAve, na.rm = TRUE),
standard_deviation = sd(BPSysAve, na.rm=TRUE)) %>% .$average
ref_avg
## [1] 108.4224
Again we will do it for the BPSysAve variable and the group of 20-29 year old females.
## complete the line
NHANES %>%
filter(AgeDecade == " 20-29" & Gender == "female") %>% summarize(minbp = min(BPSysAve, na.rm = TRUE),
maxbp = max(BPSysAve, na.rm=TRUE))
## # A tibble: 1 x 2
## minbp maxbp
## <int> <int>
## 1 84 179
Let's practice the group_by function. What we are about to do is a very common operation in data science: you will split a data table into groups and then compute summary statistics for each group.
We will compute the average and standard deviation of systolic blood pressure for females for each age group separately. Remember that the age groups are contained in AgeDecade.
##complete the line with group_by and summarize
NHANES %>%
filter(Gender == "female") %>% group_by(AgeDecade) %>% summarize(average = mean(BPSysAve, na.rm = TRUE),
standard_deviation = sd(BPSysAve, na.rm=TRUE))
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 9 x 3
## AgeDecade average standard_deviation
## <fct> <dbl> <dbl>
## 1 " 0-9" 100. 9.07
## 2 " 10-19" 104. 9.46
## 3 " 20-29" 108. 10.1
## 4 " 30-39" 111. 12.3
## 5 " 40-49" 115. 14.5
## 6 " 50-59" 122. 16.2
## 7 " 60-69" 127. 17.1
## 8 " 70+" 134. 19.8
## 9 <NA> 142. 22.9
Let's practice group_by some more. We are going to repeat the previous exercise of calculating the average and standard deviation of systolic blood pressure, but for males instead of females.
This time we will not provide much sample code. You are on your own!
NHANES %>%
filter(Gender == "male") %>% group_by(AgeDecade) %>% summarize(average = mean(BPSysAve, na.rm = TRUE),
standard_deviation = sd(BPSysAve, na.rm=TRUE))
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 9 x 3
## AgeDecade average standard_deviation
## <fct> <dbl> <dbl>
## 1 " 0-9" 97.4 8.32
## 2 " 10-19" 110. 11.2
## 3 " 20-29" 118. 11.3
## 4 " 30-39" 119. 12.3
## 5 " 40-49" 121. 14.0
## 6 " 50-59" 126. 17.8
## 7 " 60-69" 127. 17.5
## 8 " 70+" 130. 18.7
## 9 <NA> 136. 23.5
We can compute both summaries at once, because group_by permits us to group by more than one variable.
We can use group_by(AgeDecade, Gender) to group by both age decades and gender.
NHANES %>% group_by(AgeDecade, Gender) %>% summarize(average = mean(BPSysAve, na.rm = TRUE),
standard_deviation = sd(BPSysAve, na.rm=TRUE))
## `summarise()` regrouping output by 'AgeDecade' (override with `.groups` argument)
## # A tibble: 18 x 4
## # Groups: AgeDecade [9]
## AgeDecade Gender average standard_deviation
## <fct> <fct> <dbl> <dbl>
## 1 " 0-9" female 100. 9.07
## 2 " 0-9" male 97.4 8.32
## 3 " 10-19" female 104. 9.46
## 4 " 10-19" male 110. 11.2
## 5 " 20-29" female 108. 10.1
## 6 " 20-29" male 118. 11.3
## 7 " 30-39" female 111. 12.3
## 8 " 30-39" male 119. 12.3
## 9 " 40-49" female 115. 14.5
## 10 " 40-49" male 121. 14.0
## 11 " 50-59" female 122. 16.2
## 12 " 50-59" male 126. 17.8
## 13 " 60-69" female 127. 17.1
## 14 " 60-69" male 127. 17.5
## 15 " 70+" female 134. 19.8
## 16 " 70+" male 130. 18.7
## 17 <NA> female 142. 22.9
## 18 <NA> male 136. 23.5
Now we are going to explore differences in systolic blood pressure across values of the Race1 variable. We will learn to use the arrange function to order the outcome according to one variable.
Note that this function can be used to order any table by a given outcome. Here is an example that arranges by systolic blood pressure.
NHANES %>% arrange(BPSysAve)
If we want it in descending order we can use the desc function like this:
NHANES %>% arrange(desc(BPSysAve))
In this example, we will compare systolic blood pressure across values of the Race1 variable for males between the ages of 40-49.
NHANES %>%
filter(AgeDecade == " 40-49" & Gender == "male") %>%
group_by(Race1) %>%
summarize(average = mean(BPSysAve, na.rm = TRUE),
standard_deviation = sd(BPSysAve, na.rm = TRUE)) %>%
arrange(average)
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 5 x 3
## Race1 average standard_deviation
## <fct> <dbl> <dbl>
## 1 White 120. 13.4
## 2 Other 120. 16.2
## 3 Hispanic 122. 11.1
## 4 Mexican 122. 13.9
## 5 Black 126. 17.1
In Section 4, you will look at a case study involving data from the Gapminder Foundation about trends in world health and economics.
After completing Section 4, you will:
The textbook for this section is available here
More about Gapminder
The original Gapminder TED talks are available and we encourage you to watch them.
You can also find more information and raw data (in addition to what we analyze in class) on the Gapminder website.
Key points
The textbook for this section is available here
Key points
- A selection of world health and economics statistics from the Gapminder project is included in the dslabs package. You can load the data with data(gapminder).

Code
# load and inspect gapminder data
data(gapminder)
head(gapminder)
## country year infant_mortality life_expectancy fertility
## 1 Albania 1960 115.40 62.87 6.19
## 2 Algeria 1960 148.20 47.50 7.65
## 3 Angola 1960 208.00 35.98 7.32
## 4 Antigua and Barbuda 1960 NA 62.97 4.43
## 5 Argentina 1960 59.87 65.39 3.11
## 6 Armenia 1960 NA 66.86 4.55
## population gdp continent region
## 1 1636054 NA Europe Southern Europe
## 2 11124892 13828152297 Africa Northern Africa
## 3 5270844 NA Africa Middle Africa
## 4 54681 NA Americas Caribbean
## 5 20619075 108322326649 Americas South America
## 6 1867396 NA Asia Western Asia
# compare infant mortality in Sri Lanka and Turkey
gapminder %>%
filter(year == 2015 & country %in% c("Sri Lanka", "Turkey")) %>%
select(country, infant_mortality)
## country infant_mortality
## 1 Sri Lanka 8.4
## 2 Turkey 11.6
The textbook for this section is available here
Key points
Code
# basic scatterplot of life expectancy versus fertility
ds_theme_set() # set plot theme
filter(gapminder, year == 1962) %>%
ggplot(aes(fertility, life_expectancy)) +
geom_point()
# add color as continent
filter(gapminder, year == 1962) %>%
ggplot(aes(fertility, life_expectancy, color = continent)) +
geom_point()
The textbook for this section is available here
Key points
- The facet_grid function allows faceting by up to two variables, with rows faceted by one variable and columns faceted by the other variable. To facet by only one variable, use the dot operator as the other variable.
- The facet_wrap function facets by one variable and automatically wraps the series of plots so they have readable dimensions.

Code
# facet by continent and year
filter(gapminder, year %in% c(1962, 2012)) %>%
ggplot(aes(fertility, life_expectancy, col = continent)) +
geom_point() +
facet_grid(continent ~ year)
# facet by year only
filter(gapminder, year %in% c(1962, 2012)) %>%
ggplot(aes(fertility, life_expectancy, col = continent)) +
geom_point() +
facet_grid(. ~ year)
# facet by year, plots wrapped onto multiple rows
years <- c(1962, 1980, 1990, 2000, 2012)
continents <- c("Europe", "Asia")
gapminder %>%
filter(year %in% years & continent %in% continents) %>%
ggplot(aes(fertility, life_expectancy, col = continent)) +
geom_point() +
facet_wrap(~year)
The textbook for this section is available here
Key points
- The geom_line geometry connects adjacent data points to form a continuous line. A line plot is appropriate when points are regularly spaced, densely packed and from a single data series.
- You can add text labels to a plot with geom_text, specifying the coordinates where the label should appear on the graph.

Code: Single time series
# scatterplot of US fertility by year
gapminder %>%
filter(country == "United States") %>%
ggplot(aes(year, fertility)) +
geom_point()
## Warning: Removed 1 rows containing missing values (geom_point).
# line plot of US fertility by year
gapminder %>%
filter(country == "United States") %>%
ggplot(aes(year, fertility)) +
geom_line()
## Warning: Removed 1 row(s) containing missing values (geom_path).
Code: Multiple time series
# line plot fertility time series for two countries- only one line (incorrect)
countries <- c("South Korea", "Germany")
gapminder %>% filter(country %in% countries) %>%
ggplot(aes(year, fertility)) +
geom_line()
## Warning: Removed 2 row(s) containing missing values (geom_path).
# line plot fertility time series for two countries - one line per country
gapminder %>% filter(country %in% countries) %>%
ggplot(aes(year, fertility, group = country)) +
geom_line()
## Warning: Removed 2 row(s) containing missing values (geom_path).
# fertility time series for two countries - lines colored by country
gapminder %>% filter(country %in% countries) %>%
ggplot(aes(year, fertility, col = country)) +
geom_line()
## Warning: Removed 2 row(s) containing missing values (geom_path).
Code: Adding text labels to a plot
# life expectancy time series - lines colored by country and labeled, no legend
labels <- data.frame(country = countries, x = c(1975, 1965), y = c(60, 72))
gapminder %>% filter(country %in% countries) %>%
ggplot(aes(year, life_expectancy, col = country)) +
geom_line() +
geom_text(data = labels, aes(x, y, label = country), size = 5) +
theme(legend.position = "none")
The textbook for this section is available here and here: https://rafalab.github.io/dsbook/gapminder.html#visualizing-multimodal-distributions
Key points
- You can scale the x-axis with the scale_x_continuous or scale_x_log10 layers in ggplot2. Similar functions exist for the y-axis.

Code
# add dollars per day variable
gapminder <- gapminder %>%
mutate(dollars_per_day = gdp/population/365)
# histogram of dollars per day
past_year <- 1970
gapminder %>%
filter(year == past_year & !is.na(gdp)) %>%
ggplot(aes(dollars_per_day)) +
geom_histogram(binwidth = 1, color = "black")
# repeat histogram with log2 scaled data
gapminder %>%
filter(year == past_year & !is.na(gdp)) %>%
ggplot(aes(log2(dollars_per_day))) +
geom_histogram(binwidth = 1, color = "black")
# repeat histogram with log2 scaled x-axis
gapminder %>%
filter(year == past_year & !is.na(gdp)) %>%
ggplot(aes(dollars_per_day)) +
geom_histogram(binwidth = 1, color = "black") +
scale_x_continuous(trans = "log2")
The textbook for this section is available here. Note that many boxplots from the video are instead dot plots in the textbook and that a different boxplot is constructed in the textbook. Also read that section to see an example of grouping factors with the case_when function.
Key points
- Make boxplots with the geom_boxplot geometry.
- Customize axis labels with element_text. You can change the angle and justification of the text labels.
- Reorder factor levels with the reorder function, which changes the order of factor levels based on a related numeric vector. This is a way to ease comparisons.
- Show the data points on a boxplot by adding a geom_point layer. This adds information beyond the five-number summary to your plot, but too many data points can obfuscate your message.

Code: Boxplot of GDP by region
# add dollars per day variable
gapminder <- gapminder %>%
mutate(dollars_per_day = gdp/population/365)
# number of regions
length(levels(gapminder$region))
## [1] 22
# boxplot of GDP by region in 1970
past_year <- 1970
p <- gapminder %>%
filter(year == past_year & !is.na(gdp)) %>%
ggplot(aes(region, dollars_per_day))
p + geom_boxplot()
# rotate names on x-axis
p + geom_boxplot() +
theme(axis.text.x = element_text(angle = 90, hjust = 1))
Code: The reorder function
# by default, factor order is alphabetical
fac <- factor(c("Asia", "Asia", "West", "West", "West"))
levels(fac)
## [1] "Asia" "West"
# reorder factor by the category means
value <- c(10, 11, 12, 6, 4)
fac <- reorder(fac, value, FUN = mean)
levels(fac)
## [1] "West" "Asia"
Code: Enhanced boxplot ordered by median income, scaled, and showing data
# reorder by median income and color by continent
p <- gapminder %>%
filter(year == past_year & !is.na(gdp)) %>%
mutate(region = reorder(region, dollars_per_day, FUN = median)) %>% # reorder
ggplot(aes(region, dollars_per_day, fill = continent)) + # color by continent
geom_boxplot() +
theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
xlab("")
p
# log2 scale y-axis
p + scale_y_continuous(trans = "log2")
# add data points
p + scale_y_continuous(trans = "log2") + geom_point(show.legend = FALSE)
The textbook for this section is available here. Note that the boxplots are slightly different.
Key points
- Use intersect to find the overlap between two vectors.

Code: Histogram of income in West versus developing world, 1970 and 2010
# add dollars per day variable and define past year
gapminder <- gapminder %>%
mutate(dollars_per_day = gdp/population/365)
past_year <- 1970
# define Western countries
west <- c("Western Europe", "Northern Europe", "Southern Europe", "Northern America", "Australia and New Zealand")
# facet by West vs developing
gapminder %>%
filter(year == past_year & !is.na(gdp)) %>%
mutate(group = ifelse(region %in% west, "West", "Developing")) %>%
ggplot(aes(dollars_per_day)) +
geom_histogram(binwidth = 1, color = "black") +
scale_x_continuous(trans = "log2") +
facet_grid(. ~ group)
# facet by West/developing and year
present_year <- 2010
gapminder %>%
filter(year %in% c(past_year, present_year) & !is.na(gdp)) %>%
mutate(group = ifelse(region %in% west, "West", "Developing")) %>%
ggplot(aes(dollars_per_day)) +
geom_histogram(binwidth = 1, color = "black") +
scale_x_continuous(trans = "log2") +
facet_grid(year ~ group)
Code: Income distribution of West versus developing world, only countries with data
# define countries that have data available in both years
country_list_1 <- gapminder %>%
filter(year == past_year & !is.na(dollars_per_day)) %>% .$country
country_list_2 <- gapminder %>%
filter(year == present_year & !is.na(dollars_per_day)) %>% .$country
country_list <- intersect(country_list_1, country_list_2)
# make histogram including only countries with data available in both years
gapminder %>%
filter(year %in% c(past_year, present_year) & country %in% country_list) %>% # keep only selected countries
mutate(group = ifelse(region %in% west, "West", "Developing")) %>%
ggplot(aes(dollars_per_day)) +
geom_histogram(binwidth = 1, color = "black") +
scale_x_continuous(trans = "log2") +
facet_grid(year ~ group)
Code: Boxplots of income in West versus developing world, 1970 and 2010
p <- gapminder %>%
filter(year %in% c(past_year, present_year) & country %in% country_list) %>%
mutate(region = reorder(region, dollars_per_day, FUN = median)) %>%
ggplot() +
theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
xlab("") + scale_y_continuous(trans = "log2")
p + geom_boxplot(aes(region, dollars_per_day, fill = continent)) +
facet_grid(year ~ .)
# arrange matching boxplots next to each other, colored by year
p + geom_boxplot(aes(region, dollars_per_day, fill = factor(year)))
The textbook for this section is available here
Key points
- Change the y-axis of density plots to variable counts using ..count.. as the y argument.
- The case_when function defines a factor whose levels are defined by a variety of logical operations to group data.
- Plot stacked density plots using position = "stack".
- Define a weight aesthetic mapping to change the relative weights of density plots - for example, this allows weighting of plots by population rather than number of countries.

Code: Faceted smooth density plots
# smooth density plots - area under each curve adds to 1
gapminder %>%
filter(year == past_year & country %in% country_list) %>%
mutate(group = ifelse(region %in% west, "West", "Developing")) %>% group_by(group) %>%
summarize(n = n()) %>% knitr::kable()
## `summarise()` ungrouping output (override with `.groups` argument)
| group | n |
|---|---|
| Developing | 87 |
| West | 21 |
# smooth density plots - variable counts on y-axis
p <- gapminder %>%
filter(year == past_year & country %in% country_list) %>%
mutate(group = ifelse(region %in% west, "West", "Developing")) %>%
ggplot(aes(dollars_per_day, y = ..count.., fill = group)) +
scale_x_continuous(trans = "log2")
p + geom_density(alpha = 0.2, bw = 0.75) + facet_grid(year ~ .)
Code: Add new region groups with case_when
# add group as a factor, grouping regions
gapminder <- gapminder %>%
mutate(group = case_when(
.$region %in% west ~ "West",
.$region %in% c("Eastern Asia", "South-Eastern Asia") ~ "East Asia",
.$region %in% c("Caribbean", "Central America", "South America") ~ "Latin America",
.$continent == "Africa" & .$region != "Northern Africa" ~ "Sub-Saharan Africa",
TRUE ~ "Others"))
# reorder factor levels
gapminder <- gapminder %>%
mutate(group = factor(group, levels = c("Others", "Latin America", "East Asia", "Sub-Saharan Africa", "West")))
Code: Stacked density plot
# note you must redefine p with the new gapminder object first
p <- gapminder %>%
filter(year %in% c(past_year, present_year) & country %in% country_list) %>%
ggplot(aes(dollars_per_day, fill = group)) +
scale_x_continuous(trans = "log2")
# stacked density plot
p + geom_density(alpha = 0.2, bw = 0.75, position = "stack") +
facet_grid(year ~ .)
Code: Weighted stacked density plot
# weighted stacked density plot
gapminder %>%
filter(year %in% c(past_year, present_year) & country %in% country_list) %>%
group_by(year) %>%
mutate(weight = population/sum(population)*2) %>%
ungroup() %>%
ggplot(aes(dollars_per_day, fill = group, weight = weight)) +
scale_x_continuous(trans = "log2") +
geom_density(alpha = 0.2, bw = 0.75, position = "stack") + facet_grid(year ~ .)
The textbook for this section is available here
Key points
Code
# add additional cases
gapminder <- gapminder %>%
mutate(group = case_when(
.$region %in% west ~ "The West",
.$region %in% "Northern Africa" ~ "Northern Africa",
.$region %in% c("Eastern Asia", "South-Eastern Asia") ~ "East Asia",
.$region == "Southern Asia" ~ "Southern Asia",
.$region %in% c("Central America", "South America", "Caribbean") ~ "Latin America",
.$continent == "Africa" & .$region != "Northern Africa" ~ "Sub-Saharan Africa",
.$region %in% c("Melanesia", "Micronesia", "Polynesia") ~ "Pacific Islands"))
# define a data frame with group average income and average infant survival rate
surv_income <- gapminder %>%
filter(year %in% present_year & !is.na(gdp) & !is.na(infant_mortality) & !is.na(group)) %>%
group_by(group) %>%
summarize(income = sum(gdp)/sum(population)/365,
infant_survival_rate = 1 - sum(infant_mortality/1000*population)/sum(population))
## `summarise()` ungrouping output (override with `.groups` argument)
surv_income %>% arrange(income)
## # A tibble: 7 x 3
## group income infant_survival_rate
## <chr> <dbl> <dbl>
## 1 Sub-Saharan Africa 1.76 0.936
## 2 Southern Asia 2.07 0.952
## 3 Pacific Islands 2.70 0.956
## 4 Northern Africa 4.94 0.970
## 5 Latin America 13.2 0.983
## 6 East Asia 13.4 0.985
## 7 The West 77.1 0.995
# plot infant survival versus income, with transformed axes
surv_income %>% ggplot(aes(income, infant_survival_rate, label = group, color = group)) +
scale_x_continuous(trans = "log2", limit = c(0.25, 150)) +
scale_y_continuous(trans = "logit", limit = c(0.875, .9981),
breaks = c(.85, .90, .95, .99, .995, .998)) +
geom_label(size = 3, show.legend = FALSE)
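The logit scale used in scale_y_continuous above maps a proportion p to log(p/(1 - p)), the log of the odds; this spreads apart survival rates that all sit close to 1. A minimal base R sketch of the transformation:

```r
# logit transform: log of the odds p/(1 - p)
logit <- function(p) log(p / (1 - p))

# survival rates near 1 that look similar on a linear scale
# are clearly separated on the logit scale
logit(c(0.95, 0.99, 0.995))
```

This is why the breaks .99, .995, and .998 are readable on the plot even though they differ by fractions of a percent.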
The Gapminder Foundation is a non-profit organization based in Sweden that promotes global development through the use of statistics that can help reduce misconceptions about global development.
library(dplyr)
library(ggplot2)
library(dslabs)
data(gapminder)
head(gapminder)
country year infant_mortality life_expectancy fertility population
<fctr> <int> <dbl> <dbl> <dbl> <dbl>
1 Albania 1960 115.40 62.87 6.19 1636054
2 Algeria 1960 148.20 47.50 7.65 11124892
3 Angola 1960 208.00 35.98 7.32 5270844
4 Antigua and Barbuda 1960 NA 62.97 4.43 54681
5 Argentina 1960 59.87 65.39 3.11 20619075
6 Armenia 1960 NA 66.86 4.55 1867396
## fill out the missing parts in filter and aes
gapminder %>% filter(continent=="Africa" & year=="2012") %>%
ggplot(aes(fertility,life_expectancy)) +
geom_point()
Note that there is quite a bit of variability in life expectancy and fertility with some African countries having very high life expectancies. There also appear to be three clusters in the plot.
Remake the plot from the previous exercises, but this time use color to distinguish the different regions of Africa to see if this explains the clusters. Remember that you can explore the gapminder data to see how the regions of Africa are labeled in the data frame!
library(dplyr)
library(ggplot2)
library(dslabs)
data(gapminder)
gapminder %>% filter(continent=="Africa" & year=="2012") %>%
ggplot(aes(fertility,life_expectancy, color=region)) +
geom_point()
While many of the countries in the high life expectancy/low fertility cluster are from Northern Africa, three countries are not.
library(dplyr)
library(dslabs)
data(gapminder)
head(gapminder)
country year infant_mortality life_expectancy fertility population
<fctr> <int> <dbl> <dbl> <dbl> <dbl>
1 Albania 1960 115.40 62.87 6.19 1636054
2 Algeria 1960 148.20 47.50 7.65 11124892
3 Angola 1960 208.00 35.98 7.32 5270844
4 Antigua and Barbuda 1960 NA 62.97 4.43 54681
5 Argentina 1960 59.87 65.39 3.11 20619075
6 Armenia 1960 NA 66.86 4.55 1867396
df <- gapminder %>% filter(continent=="Africa" & year=="2012" & fertility <=3 & life_expectancy>=70) %>%
select(country,region)
The Vietnam War lasted from 1955 to 1975. Do the data support war having a negative effect on life expectancy? We will create a time series plot that covers the period from 1960 to 2010 of life expectancy for Vietnam and the United States, using color to distinguish the two countries. We start the analysis by generating a table.
library(dplyr)
library(dslabs)
data(gapminder)
head(gapminder)
country year infant_mortality life_expectancy fertility population
<fctr> <int> <dbl> <dbl> <dbl> <dbl>
1 Albania 1960 115.40 62.87 6.19 1636054
2 Algeria 1960 148.20 47.50 7.65 11124892
3 Angola 1960 208.00 35.98 7.32 5270844
4 Antigua and Barbuda 1960 NA 62.97 4.43 54681
5 Argentina 1960 59.87 65.39 3.11 20619075
6 Armenia 1960 NA 66.86 4.55 1867396
tab <- gapminder %>% filter(year>=1960 & year<=2010 & country%in%c("Vietnam","United States"))
Now that you have created the data table in Exercise 4, it is time to plot the data for the two countries.
p <- tab %>% ggplot(aes(year,life_expectancy,color=country)) + geom_line()
p
Cambodia was also involved in this conflict and, after the war, Pol Pot and his communist Khmer Rouge took control and ruled Cambodia from 1975 to 1979. He is considered one of the most brutal dictators in history. Do the data support this claim?
Use a single line of code to create a time series plot from 1960 to 2010 of life expectancy vs year for Cambodia.
library(dplyr)
library(ggplot2)
library(dslabs)
data(gapminder)
gapminder %>% filter(year>=1960 & year <= 2010 & country=="Cambodia") %>% ggplot(aes(year,life_expectancy)) + geom_line()
Now we are going to calculate and plot dollars per day for African countries in 2010 using GDP data.
In the first part of this analysis, we will create the dollars per day variable.
- Use mutate to create a dollars_per_day variable, which is defined as gdp/population/365.
- Create the dollars_per_day variable for African countries for the year 2010.
- Remove any NA values.
- Save the mutated dataset as daydollars.
library(dplyr)
library(dslabs)
data(gapminder)
daydollars <- gapminder %>% mutate(dollars_per_day=gdp/population/365)%>% filter(year==2010 & continent=="Africa" & !is.na(dollars_per_day))
Now we are going to calculate and plot dollars per day for African countries in 2010 using GDP data.
In the second part of this analysis, we will plot the smooth density plot using a log (base 2) x axis.
- The dataset including the dollars_per_day variable is preloaded as daydollars.
- Create a smooth density plot of dollars per day from daydollars.
- Use a log (base 2) scale for the x axis.
daydollars %>% ggplot(aes(dollars_per_day)) + geom_density() + scale_x_continuous(trans="log2")
Now we are going to combine the plotting tools we have used in the past two exercises to create density plots for multiple years.
- Create the dollars_per_day variable as in Exercise 7, but for African countries in the years 1970 and 2010 this time.
- Make sure you remove any NA values.
- Create a smooth density plot of dollars per day for 1970 and 2010 using a log (base 2) scale for the x axis.
- Use facet_grid to show a different density plot for 1970 and 2010.
library(dplyr)
library(ggplot2)
library(dslabs)
data(gapminder)
daydollars <- gapminder %>% mutate(dollars_per_day=gdp/population/365)%>% filter(year %in% c(1970,2010) & continent=="Africa" & !is.na(dollars_per_day))
daydollars %>% ggplot(aes(dollars_per_day)) + geom_density() + scale_x_continuous(trans='log2') + facet_grid(.~year)
Now we are going to edit the code from Exercise 9 to show stacked histograms of each region in Africa.
Much of the code will be the same as in Exercise 9:
- Create the dollars_per_day variable as in Exercise 7, but for African countries in the years 1970 and 2010 this time.
- Make sure you remove any NA values.
- Create a smooth density plot of dollars per day for 1970 and 2010 using a log (base 2) scale for the x axis.
- Use facet_grid to show a different density plot for 1970 and 2010.
- Make sure the densities are smooth by using bw = 0.5.
- Use the fill and position arguments where appropriate to create the stacked histograms of each region.
library(dplyr)
library(ggplot2)
library(dslabs)
data(gapminder)
daydollars <- gapminder %>% mutate(dollars_per_day=gdp/population/365)%>% filter(year %in% c(1970,2010) & continent=="Africa" & !is.na(dollars_per_day))
daydollars %>% ggplot(aes(dollars_per_day,fill = region)) + geom_density(bw=0.5,position='stack') + scale_x_continuous(trans='log2') + facet_grid(.~year)
We are going to continue looking at patterns in the gapminder dataset by plotting infant mortality rates versus dollars per day for African countries.
- Generate dollars_per_day using mutate and filter for the year 2010 for African countries.
- Remember to remove NA values.
- Store the mutated dataset in gapminder_Africa_2010.
- Make a scatter plot of infant_mortality versus dollars_per_day for countries in the African continent.
- Use color to denote the different regions of Africa.
library(dplyr)
library(ggplot2)
library(dslabs)
data(gapminder)
gapminder_Africa_2010 <- gapminder %>%
  mutate(dollars_per_day = gdp/population/365) %>%
  filter(year == 2010 & continent == "Africa" & !is.na(dollars_per_day))
# now make the scatter plot
gapminder_Africa_2010 %>% ggplot(aes(dollars_per_day,infant_mortality,color = region)) + geom_point()
Now we are going to transform the x axis of the plot from the previous exercise.
- The mutated dataset is preloaded as gapminder_Africa_2010.
- As in the previous exercise, make a scatter plot of infant_mortality versus dollars_per_day for countries in the African continent.
- As in the previous exercise, use color to denote the different regions of Africa.
- Transform the x axis to be in the log (base 2) scale.
gapminder_Africa_2010 %>% ggplot(aes(dollars_per_day,infant_mortality, color=region)) + geom_point() + scale_x_continuous(trans='log2')
Note that there is a large variation in infant mortality and dollars per day among African countries.
As an example, one country has an infant mortality rate of less than 20 per 1000 and dollars per day of 16, while another has an infant mortality rate over 100 per 1000 (10%) and dollars per day of about 1.
In this exercise, we will remake the plot from Exercise 12 with country names instead of points so we can identify which countries are which.
- The mutated dataset is preloaded as gapminder_Africa_2010.
- As in the previous exercise, make a scatter plot of infant_mortality versus dollars_per_day for countries in the African continent.
- As in the previous exercise, use color to denote the different regions of Africa.
- As in the previous exercise, transform the x axis to be in the log (base 2) scale.
- Add a layer to display country names instead of points.
gapminder_Africa_2010 %>% ggplot(aes(dollars_per_day,infant_mortality, color=region,label = country)) + geom_point() + scale_x_continuous(trans='log2') +
geom_text()
Now we are going to look at changes in the infant mortality and dollars per day patterns for African countries between 1970 and 2010.
- Generate dollars_per_day using mutate and filter for the years 1970 and 2010 for African countries.
- Remember to remove NA values.
- As in the previous exercise, make a scatter plot of infant_mortality versus dollars_per_day for countries in the African continent.
- As in the previous exercise, use color to denote the different regions of Africa.
- As in the previous exercise, transform the x axis to be in the log (base 2) scale.
- As in the previous exercise, add a layer to display country names instead of points.
- Use facet_grid to show different plots for 1970 and 2010.
library(dplyr)
library(ggplot2)
library(dslabs)
data(gapminder)
gapminder %>%
filter(continent == "Africa" & year %in% c(1970,2010) & !is.na(gdp) & !is.na(year) & !is.na(infant_mortality)) %>%
mutate(dollars_per_day = gdp/population/365) %>%
ggplot(aes(dollars_per_day, infant_mortality, color = region,label = country)) +
geom_point() +
scale_x_continuous(trans = "log2") +
geom_text() +
facet_grid(year~.)
Section 5 covers some general principles that can serve as guides for effective data visualization.
After completing Section 5, you will:
- understand basic principles of effective data visualization.
- understand the importance of keeping your goal in mind when deciding on a visualization approach.
- understand principles for encoding data, including position, aligned lengths, angles, area, brightness, and color hue.
- know when to include the number zero in visualizations.
- be able to use techniques to ease comparisons, such as using common axes, putting visual cues to be compared adjacent to one another, and using color effectively.
The textbook for this section is available here
1: Customizing plots - Pie charts
Pie charts are appropriate:
- [ ] A. When we want to display percentages.
- [ ] B. When ggplot2 is not available.
- [ ] C. When I am in a bakery.
- [X] D. Never. Barplots and tables are always better.
What is the problem with this plot?
Take a look at the following two plots. They show the same information: rates of measles by state in the United States for 1928.
1: Customizing plots - watch and learn
To make the plot on the right in the exercise from the last set of assessments, we had to reorder the levels of the state variable.
- Redefine the state object so that the levels are re-ordered by rate.
- Print the new object state and its levels so you can see that the vector is now re-ordered by the levels.
library(dplyr)
library(ggplot2)
library(dslabs)
dat <- us_contagious_diseases %>%
filter(year == 1967 & disease=="Measles" & !is.na(population)) %>% mutate(rate = count / population * 10000 * 52 / weeks_reporting)
state <- dat$state
rate <- dat$count/(dat$population/10000)*(52/dat$weeks_reporting)
state <- reorder(state,rate)
print(state)
## [1] Alabama Alaska Arizona
## [4] Arkansas California Colorado
## [7] Connecticut Delaware District Of Columbia
## [10] Florida Georgia Hawaii
## [13] Idaho Illinois Indiana
## [16] Iowa Kansas Kentucky
## [19] Louisiana Maine Maryland
## [22] Massachusetts Michigan Minnesota
## [25] Mississippi Missouri Montana
## [28] Nebraska Nevada New Hampshire
## [31] New Jersey New Mexico New York
## [34] North Carolina North Dakota Ohio
## [37] Oklahoma Oregon Pennsylvania
## [40] Rhode Island South Carolina South Dakota
## [43] Tennessee Texas Utah
## [46] Vermont Virginia Washington
## [49] West Virginia Wisconsin Wyoming
## attr(,"scores")
## Alabama Alaska Arizona
## 4.16107582 5.46389893 6.32695891
## Arkansas California Colorado
## 6.87899954 2.79313560 7.96331905
## Connecticut Delaware District Of Columbia
## 0.36986840 1.13098183 0.35873614
## Florida Georgia Hawaii
## 2.89358806 0.09987991 2.50173748
## Idaho Illinois Indiana
## 6.03115170 1.20115480 1.34027323
## Iowa Kansas Kentucky
## 2.94948911 0.66386422 4.74576011
## Louisiana Maine Maryland
## 0.46088071 2.57520433 0.49922233
## Massachusetts Michigan Minnesota
## 0.74762338 1.33466700 0.37722410
## Mississippi Missouri Montana
## 3.11366532 0.75696354 5.00433320
## Nebraska Nevada New Hampshire
## 3.64389801 6.43683882 0.47181511
## New Jersey New Mexico New York
## 0.88414264 6.15969926 0.66849058
## North Carolina North Dakota Ohio
## 1.92529764 14.48024642 1.16382241
## Oklahoma Oregon Pennsylvania
## 3.27496900 8.75036439 0.67687303
## Rhode Island South Carolina South Dakota
## 0.68207448 2.10412531 0.90289534
## Tennessee Texas Utah
## 5.47344506 12.49773953 4.03005836
## Vermont Virginia Washington
## 1.00970314 5.28270939 17.65180349
## West Virginia Wisconsin Wyoming
## 8.59456463 4.96246019 6.97303449
## 51 Levels: Georgia District Of Columbia Connecticut ... Washington
levels(state)
## [1] "Georgia" "District Of Columbia" "Connecticut"
## [4] "Minnesota" "Louisiana" "New Hampshire"
## [7] "Maryland" "Kansas" "New York"
## [10] "Pennsylvania" "Rhode Island" "Massachusetts"
## [13] "Missouri" "New Jersey" "South Dakota"
## [16] "Vermont" "Delaware" "Ohio"
## [19] "Illinois" "Michigan" "Indiana"
## [22] "North Carolina" "South Carolina" "Hawaii"
## [25] "Maine" "California" "Florida"
## [28] "Iowa" "Mississippi" "Oklahoma"
## [31] "Nebraska" "Utah" "Alabama"
## [34] "Kentucky" "Wisconsin" "Montana"
## [37] "Virginia" "Alaska" "Tennessee"
## [40] "Idaho" "New Mexico" "Arizona"
## [43] "Nevada" "Arkansas" "Wyoming"
## [46] "Colorado" "West Virginia" "Oregon"
## [49] "Texas" "North Dakota" "Washington"
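The mechanics of reorder can be seen on a small hypothetical example: it relevels a factor according to a summary (by default the mean) of a second vector:

```r
# toy example (hypothetical data): relevel a factor by the mean of a value
x <- factor(c("a", "a", "b", "c"))
value <- c(10, 12, 1, 5)
y <- reorder(x, value)  # default FUN = mean

levels(x)  # "a" "b" "c" (alphabetical)
levels(y)  # "b" "c" "a" (ordered by mean value: 1, 5, 11)
```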
Now we are going to customize this plot a little more by creating a rate variable and reordering by that variable instead.
- Add a single line of code to the definition of the dat table that uses mutate to reorder the states by the rate variable.
- The sample code provided will then create a bar plot using the newly defined dat.
library(dplyr)
library(ggplot2)
library(dslabs)
data(us_contagious_diseases)
dat <- us_contagious_diseases %>% filter(year == 1967 & disease=="Measles" & count>0 & !is.na(population)) %>%
mutate(rate = count / population * 10000 * 52 / weeks_reporting) %>% mutate(state = reorder(state, rate))
dat %>% ggplot(aes(state, rate)) +
geom_bar(stat="identity") +
coord_flip()
Say we are interested in comparing gun homicide rates across regions of the US. We see this plot:
library(dplyr)
library(ggplot2)
library(dslabs)
data("murders")
murders %>% mutate(rate = total/population*100000) %>%
group_by(region) %>%
summarize(avg = mean(rate)) %>%
mutate(region = factor(region)) %>%
ggplot(aes(region, avg)) +
geom_bar(stat="identity") +
ylab("Murder Rate Average")
and decide to move to a state in the western region. What is the main problem with this interpretation?
- [ ] A. The categories are ordered alphabetically.
- [ ] B. The graph does not show standard errors.
- [X] C. It does not show all the data. We do not see the variability within a region and it’s possible that the safest states are not in the West.
- [ ] D. The Northeast has the lowest average.
To further investigate whether moving to the western region is a wise decision, let’s make a box plot of murder rates by region, showing all points.
- Make a box plot of the murder rates by region.
- Order the regions by their median murder rate.
- Show all of the points on the box plot.
library(dplyr)
library(ggplot2)
library(dslabs)
data("murders")
murders %>% mutate(rate = total/population*100000) %>%
mutate(region=reorder(region, rate, FUN=median)) %>%
ggplot(aes(region, rate)) +
geom_boxplot() +
geom_point()
The sample code given creates a tile plot showing the rate of measles cases per population. We are going to modify the tile plot to look at smallpox cases instead.
if(!require(RColorBrewer)) install.packages("RColorBrewer")
library(dplyr)
library(ggplot2)
library(RColorBrewer)
library(dslabs)
data(us_contagious_diseases)
head(us_contagious_diseases)
##       disease   state year weeks_reporting count population
## 1 Hepatitis A Alabama 1966              50   321    3345787
## 2 Hepatitis A Alabama 1967              49   291    3364130
## 3 Hepatitis A Alabama 1968              52   314    3386068
## 4 Hepatitis A Alabama 1969              49   380    3412450
## 5 Hepatitis A Alabama 1970              51   413    3444165
## 6 Hepatitis A Alabama 1971              51   378    3481798
the_disease = "Measles"
dat <- us_contagious_diseases %>%
filter(!state%in%c("Hawaii","Alaska") & disease == the_disease) %>%
mutate(rate = count / population * 10000) %>%
mutate(state = reorder(state, rate))
dat %>% ggplot(aes(year, state, fill = rate)) +
geom_tile(color = "grey50") +
scale_x_continuous(expand=c(0,0)) +
scale_fill_gradientn(colors = brewer.pal(9, "Reds"), trans = "sqrt") +
theme_minimal() +
theme(panel.grid = element_blank()) +
ggtitle(the_disease) +
ylab("") +
xlab("")
library(dplyr)
library(ggplot2)
library(RColorBrewer)
library(dslabs)
data(us_contagious_diseases)
the_disease = "Smallpox"
dat <- us_contagious_diseases %>%
filter(!state%in%c("Hawaii","Alaska") & disease == the_disease & weeks_reporting >= 10) %>%
mutate(rate = count / population * 10000) %>%
mutate(state = reorder(state, rate))
dat %>% ggplot(aes(year, state, fill = rate)) +
geom_tile(color = "grey50") +
scale_x_continuous(expand=c(0,0)) +
scale_fill_gradientn(colors = brewer.pal(9, "Reds"), trans = "sqrt") +
theme_minimal() +
theme(panel.grid = element_blank()) +
ggtitle(the_disease) +
ylab("") +
xlab("")
The sample code given creates a time series plot showing the rate of measles cases per population by state. We are going to again modify this plot to look at smallpox cases instead.
library(dplyr)
library(ggplot2)
library(dslabs)
library(RColorBrewer)
data(us_contagious_diseases)
the_disease = "Measles"
dat <- us_contagious_diseases %>%
filter(!state%in%c("Hawaii","Alaska") & disease == the_disease) %>%
mutate(rate = count / population * 10000) %>%
mutate(state = reorder(state, rate))
str(dat)
## 'data.frame': 3724 obs. of 7 variables:
## $ disease : Factor w/ 7 levels "Hepatitis A",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ state : Factor w/ 51 levels "Mississippi",..: 9 9 9 9 9 9 9 9 9 9 ...
## ..- attr(*, "scores")= num [1:51(1d)] 9.27 NA 24.15 9.37 19.16 ...
## .. ..- attr(*, "dimnames")=List of 1
## .. .. ..$ : chr "Alabama" "Alaska" "Arizona" "Arkansas" ...
## $ year : num 1928 1929 1930 1931 1932 ...
## $ weeks_reporting: int 52 49 52 49 41 51 52 49 40 49 ...
## $ count : num 8843 2959 4156 8934 270 ...
## $ population : num 2589923 2619131 2646248 2670818 2693027 ...
## $ rate : num 34.1 11.3 15.7 33.5 1 ...
avg <- us_contagious_diseases %>%
filter(disease==the_disease) %>% group_by(year) %>%
summarize(us_rate = sum(count, na.rm=TRUE)/sum(population, na.rm=TRUE)*10000)
dat %>% ggplot() +
geom_line(aes(year, rate, group = state), color = "grey50",
show.legend = FALSE, alpha = 0.2, size = 1) +
geom_line(mapping = aes(year, us_rate), data = avg, size = 1, color = "black") +
scale_y_continuous(trans = "sqrt", breaks = c(5,25,125,300)) +
ggtitle("Cases per 10,000 by state") +
xlab("") +
ylab("") +
geom_text(data = data.frame(x=1955, y=50), mapping = aes(x, y, label="US average"), color="black") +
geom_vline(xintercept=1963, col = "blue")
library(dplyr)
library(ggplot2)
library(dslabs)
library(RColorBrewer)
data(us_contagious_diseases)
the_disease = "Smallpox"
dat <- us_contagious_diseases %>%
filter(!state%in%c("Hawaii","Alaska") & disease == the_disease & weeks_reporting >= 10) %>%
mutate(rate = count / population * 10000) %>%
mutate(state = reorder(state, rate))
str(dat)
## 'data.frame': 1014 obs. of 7 variables:
## $ disease : Factor w/ 7 levels "Hepatitis A",..: 7 7 7 7 7 7 7 7 7 7 ...
## $ state : Factor w/ 51 levels "Rhode Island",..: 17 17 17 17 17 17 17 17 17 17 ...
## ..- attr(*, "scores")= num [1:51(1d)] 0.382 NA 2.011 0.805 0.924 ...
## .. ..- attr(*, "dimnames")=List of 1
## .. .. ..$ : chr "Alabama" "Alaska" "Arizona" "Arkansas" ...
## $ year : num 1928 1929 1930 1931 1932 ...
## $ weeks_reporting: int 51 52 52 52 52 52 52 52 51 52 ...
## $ count : num 341 378 192 295 467 82 23 42 12 54 ...
## $ population : num 2589923 2619131 2646248 2670818 2693027 ...
## $ rate : num 1.317 1.443 0.726 1.105 1.734 ...
avg <- us_contagious_diseases %>%
filter(disease==the_disease) %>% group_by(year) %>%
summarize(us_rate = sum(count, na.rm=TRUE)/sum(population, na.rm=TRUE)*10000)
dat %>% ggplot() +
geom_line(aes(year, rate, group = state), color = "grey50",
show.legend = FALSE, alpha = 0.2, size = 1) +
geom_line(mapping = aes(year, us_rate), data = avg, size = 1, color = "black") +
scale_y_continuous(trans = "sqrt", breaks = c(5,25,125,300)) +
ggtitle("Cases per 10,000 by state") +
xlab("") +
ylab("") +
geom_text(data = data.frame(x=1955, y=50), mapping = aes(x, y, label="US average"), color="black") +
geom_vline(xintercept=1963, col = "blue")
Now we are going to look at the rates of all diseases in one state. Again, you will be modifying the sample code to produce the desired plot.
- For the state of California, make a time series plot showing rates for all diseases.
- Include only years with 10 or more weeks reporting.
- Use a different color for each disease.
library(dplyr)
library(ggplot2)
library(dslabs)
library(RColorBrewer)
data(us_contagious_diseases)
us_contagious_diseases %>% filter(state=="California" & weeks_reporting >= 10) %>%
group_by(year, disease) %>%
summarize(rate = sum(count)/sum(population)*10000) %>%
ggplot(aes(year, rate,color=disease)) +
geom_line()
Now we are going to make a time series plot for the rates of all diseases in the United States. For this exercise, we have provided less sample code - you can take a look at the previous exercise to get you started.
- Compute the US rate by using summarize to sum over states.
- The US rate for each disease will be the total number of cases divided by the total population.
- Remember to convert to cases per 10,000.
- You will need to filter for !is.na(population) to get all the data.
- Plot each disease in a different color.
library(dplyr)
library(ggplot2)
library(dslabs)
library(RColorBrewer)
data(us_contagious_diseases)
us_contagious_diseases %>% filter(!is.na(population)) %>%
group_by(year, disease) %>%
summarize(rate=sum(count)/sum(population)*10000) %>%
ggplot(aes(year, rate,color=disease)) + geom_line()
Background
Astronomy is one of the oldest data-driven sciences. In the late 1800s, the director of the Harvard College Observatory hired women to analyze astronomical data, which at the time was done using photographic glass plates. These women became known as the Harvard Computers. They computed the position and luminosity of various astronomical objects such as stars and galaxies. (If you are interested, you can learn more about the Harvard Computers). Today, astronomy is even more of a data-driven science, with an inordinate amount of data being produced by modern instruments every day.
In the following exercises we will analyze some actual astronomical data to inspect properties of stars, their absolute magnitude (which relates to a star’s luminosity, or brightness), temperature and type (spectral class).
Libraries and Options
#update.packages()
library(tidyverse)
library(dslabs)
data(stars)
options(digits = 3) # report 3 significant digits
Question 1
Load the stars data frame from dslabs. This contains the name, absolute magnitude, temperature in degrees Kelvin, and spectral class of selected stars. Absolute magnitude (shortened in these problems to simply “magnitude”) is a function of star luminosity, where negative values of magnitude have higher luminosity.
# What is the mean magnitude?
mean(stars$magnitude)
## [1] 4.26
# What is the standard deviation of magnitude?
sd(stars$magnitude)
## [1] 7.35
Question 2
Make a density plot of the magnitude.
stars %>%
ggplot(aes(magnitude)) +
geom_density()
# How many peaks are there in the data?
# A: 2
Question 3
Examine the distribution of star temperature. Which of these statements best characterizes the temperature distribution?
stars %>%
ggplot(aes(temp)) +
geom_density()
# How many peaks are there in the data?
# A: 2
Question 4
Make a scatter plot of the data with temperature on the x-axis and magnitude on the y-axis and examine the relationship between the variables. Recall that lower magnitude means a more luminous (brighter) star.
stars %>%
ggplot(aes(x=temp, y=magnitude)) +
geom_point()
Question 5
For various reasons, scientists do not always follow standard conventions when making plots, and astronomers usually transform values of star luminosity and temperature before plotting. Flip the y-axis so that lower values of magnitude are at the top of the axis (recall that more luminous stars have lower magnitude) using scale_y_reverse. Take the log base 10 of temperature and then also flip the x-axis.
Fill in the blanks in the statements below to describe the resulting plot:
The brightest, highest temperature stars are in the ______________ corner of the plot.
stars %>%
ggplot(aes(x=log10(temp), y=magnitude)) +
scale_y_reverse() +
scale_x_reverse() +
geom_point()
Question 6
The trends you see allow scientists to learn about the evolution and lifetime of stars. We will call the primary group of stars to which most stars belong (see Question 4) the main sequence stars. Most stars belong to this main sequence; however, some rarer stars are classified as old and evolved stars. These stars tend to be hotter stars, but also have low luminosity, and are known as white dwarfs.
How many white dwarfs are there in our sample?
A: 4
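One way to check this count in code is to filter on thresholds eyeballed from the plot in Question 5: hot stars (say, temperature of roughly 5000K or more) that are nonetheless faint (magnitude above about 10). These cutoffs are assumptions read off the plot, not values given in the problem:

```r
library(dplyr)
library(dslabs)
data(stars)

# white dwarf candidates: hot but low luminosity
# (temperature and magnitude thresholds are eyeballed from the plot)
stars %>%
  filter(temp >= 5000 & magnitude > 10) %>%
  select(star, magnitude, temp)
```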
Question 7
Consider stars which are not part of the Main Group but are not old/evolved (white dwarf) stars. These stars must also be unique in certain ways and are known as giants. Use the plot from Question 5 to estimate the average temperature of a giant.
Which of these temperatures is closest to the average temperature of a giant?: A: 5000K
Question 8
We can now identify whether specific stars are main sequence stars, red giants or white dwarfs. Add text labels to the plot to answer these questions. You may wish to plot only a selection of the labels, repel the labels, or zoom in on the plot in RStudio so you can locate specific stars.
Fill in the blanks in the statements below:
library(ggrepel)
stars %>%
ggplot(aes(x=log10(temp), y=magnitude, label=star)) +
scale_y_reverse() +
scale_x_reverse() +
geom_point() +
geom_text_repel()
# The least luminous star in the sample with a surface temperature over 5000K is _________.
# A: van Maanens Star
# The two stars with lowest temperature and highest luminosity are known as supergiants. The two supergiants in this dataset are ____________.
# A: Betelgeuse and Antares
# The Sun is a ______________.
# A: main sequence star
stars %>%
filter(star=='Sun') %>%
select_all()
## star magnitude temp type
## 1 Sun 4.8 5840 G
Question 9
Remove the text labels and color the points by star type. This classification describes the properties of the star’s spectrum, the amount of light produced at various wavelengths.
stars %>%
ggplot(aes(x=log10(temp), y=magnitude, color=type)) +
scale_y_reverse() +
scale_x_reverse() +
geom_point()
# Which star type has the lowest temperature?
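One way to answer this from the data rather than the plot is to average temperature within each spectral class and sort, coldest first. This is a sketch of the summary, not part of the required solution:

```r
library(dplyr)
library(dslabs)
data(stars)

# mean temperature by spectral class, coldest type first
stars %>%
  group_by(type) %>%
  summarize(mean_temp = mean(temp)) %>%
  arrange(mean_temp)
```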
Background
The planet’s surface temperature is increasing due to human greenhouse gas emissions, and this global warming and carbon cycle disruption is wreaking havoc on natural systems. Living systems that depend on current temperature, weather, currents and carbon balance are jeopardized, and human society will be forced to contend with widespread economic, social, political and environmental damage as the temperature continues to rise. Although most countries recognize that global warming is a crisis and that humans must act to limit its effects, little action has been taken to limit or reverse human impact on the climate.
One limitation is the spread of misinformation related to climate change and its causes, especially the extent to which humans have contributed to global warming. In these exercises, we examine the relationship between global temperature changes, greenhouse gases and human carbon emissions using time series of actual atmospheric and ice core measurements from the National Oceanic and Atmospheric Administration (NOAA) and Carbon Dioxide Information Analysis Center (CDIAC).
Libraries and Options
#update.packages()
library(tidyverse)
library(dslabs)
data(temp_carbon)
data(greenhouse_gases)
data(historic_co2)
Question 1
Load the temp_carbon dataset from dslabs, which contains annual global temperature anomalies (difference from 20th century mean temperature in degrees Celsius), temperature anomalies over the land and ocean, and global carbon emissions (in metric tons). Note that the date ranges differ for temperature and carbon emissions.
Which of these code blocks return the latest year for which carbon emissions are reported?
str(temp_carbon)
## 'data.frame': 268 obs. of 5 variables:
## $ year : num 1880 1881 1882 1883 1884 ...
## $ temp_anomaly : num -0.11 -0.08 -0.1 -0.18 -0.26 -0.25 -0.24 -0.28 -0.13 -0.09 ...
## $ land_anomaly : num -0.48 -0.4 -0.48 -0.66 -0.69 -0.56 -0.51 -0.47 -0.41 -0.31 ...
## $ ocean_anomaly : num -0.01 0.01 0 -0.04 -0.14 -0.17 -0.17 -0.23 -0.05 -0.02 ...
## $ carbon_emissions: num 236 243 256 272 275 277 281 295 327 327 ...
temp_carbon %>%
.$year %>%
max()
## [1] 2018
temp_carbon %>%
filter(!is.na(carbon_emissions)) %>%
pull(year) %>%
max()
## [1] 2014
#temp_carbon %>%
# filter(!is.na(carbon_emissions)) %>%
# max(year)
temp_carbon %>%
filter(!is.na(carbon_emissions)) %>%
.$year %>%
max()
## [1] 2014
temp_carbon %>%
filter(!is.na(carbon_emissions)) %>%
select(year) %>%
max()
## [1] 2014
#temp_carbon %>%
# filter(!is.na(carbon_emissions)) %>%
# max(.$year)
Question 2
Inspect the difference in carbon emissions in temp_carbon from the first available year to the last available year.
# What is the first year for which carbon emissions (carbon_emissions) data are available?
year_min <- temp_carbon %>%
filter(!is.na(carbon_emissions)) %>%
.$year %>%
min()
# What is the last year for which carbon emissions data are available?
year_max <- temp_carbon %>%
filter(!is.na(carbon_emissions)) %>%
.$year %>%
max()
# How many times larger were carbon emissions in the last year relative to the first year?
ratio <- temp_carbon %>%
filter(year %in% c(year_min, year_max)) %>%
.$carbon_emissions
#A:
ratio[2] / ratio[1]
## [1] 3285
# Scatter plot
temp_carbon %>%
filter(!is.na(carbon_emissions)) %>%
ggplot(aes(x=year, y=carbon_emissions)) +
geom_point()
Question 3
Inspect the difference in temperature in temp_carbon from the first available year to the last available year.
# What is the first year for which global temperature anomaly (temp_anomaly) data are available?
year_min <- temp_carbon %>%
filter(!is.na(temp_anomaly)) %>%
.$year %>%
min()
year_min
## [1] 1880
# What is the last year for which global temperature anomaly data are available?
year_max <- temp_carbon %>%
filter(!is.na(temp_anomaly)) %>%
.$year %>%
max()
year_max
## [1] 2018
# How many degrees Celsius has temperature increased over the date range?
diff <- temp_carbon %>%
filter(year %in% c(year_min, year_max)) %>%
.$temp_anomaly
#A:
diff
## [1] -0.11 0.82
diff[2] - diff[1]
## [1] 0.93
Question 4
Create a time series line plot of the temperature anomaly. Only include years where temperatures are reported. Save this plot to the object p.
Which command adds a blue horizontal line indicating the 20th century mean temperature?
p <- temp_carbon %>%
filter(!is.na(temp_anomaly)) %>%
ggplot(aes(year, temp_anomaly)) +
geom_line() +
geom_hline(aes(yintercept=0), color='blue')
p
Question 5
Continue working with p, the plot created in the previous question.
Change the y-axis label to be “Temperature anomaly (degrees C)”. Add a title, “Temperature anomaly relative to 20th century mean, 1880-2018”. Also add a text layer to the plot: the x-coordinate should be 2000, the y-coordinate should be 0.05, the text should be “20th century mean”, and the text color should be blue.
q <- temp_carbon %>%
filter(!is.na(temp_anomaly)) %>%
ggplot(aes(year, temp_anomaly)) +
geom_line() +
geom_hline(aes(yintercept=0), color='blue') +
ylab("Temperature anomaly (degrees C)") +
ggtitle("Temperature anomaly relative to 20th century mean, 1880-2018") +
geom_text(aes(x=2000, y=0.05, label="20th century mean"), col='blue')
q
Question 6
When was the earliest year with a temperature above the 20th century mean?
year_min <- temp_carbon %>%
filter(!is.na(temp_anomaly) & temp_anomaly>0) %>%
.$year %>%
min()
year_min
## [1] 1939
When was the last year with an average temperature below the 20th century mean?
year_max <- temp_carbon %>%
filter(!is.na(temp_anomaly) & temp_anomaly<0) %>%
.$year %>%
max()
year_max
## [1] 1976
In what year did the temperature anomaly exceed 0.5 degrees Celsius for the first time?
year_ <- temp_carbon %>%
filter(!is.na(temp_anomaly) & temp_anomaly>0.5) %>%
.$year %>%
min()
year_
## [1] 1997
Question 7
Add layers to the previous plot to include line graphs of the temperature anomaly in the ocean (ocean_anomaly) and on land (land_anomaly). Assign different colors to the lines. Compare the global temperature anomaly to the land temperature anomaly and ocean temperature anomaly.
Which region has the largest 2018 temperature anomaly relative to the 20th century mean?
temp_carbon %>%
  filter(!is.na(temp_anomaly)) %>%
  ggplot(aes(year, temp_anomaly)) +
  geom_line(color = "red") +
  geom_hline(yintercept = 0, color = "blue") +
  xlim(c(1880, 2018)) +
  ylab("Temperature anomaly (degrees C)") +
  ggtitle("Temperature anomaly relative to 20th century mean, 1880-2018") +
  annotate("text", x = 2000, y = 0.05, label = "20th century mean", color = "blue") +
  geom_line(aes(year, ocean_anomaly), color = "cyan") +
  geom_line(aes(year, land_anomaly), color = "green")
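Hard-coding a color in each geom_line() call works, but it produces no legend. An alternative sketch, assuming tidyr is available: reshape the three anomaly columns into long format so that ggplot2 maps color to the series name and builds the legend automatically.

```r
library(dplyr)
library(tidyr)
library(ggplot2)
library(dslabs)
data(temp_carbon)

# Stack the three anomaly series into one value column keyed by region
temp_carbon %>%
  filter(!is.na(temp_anomaly)) %>%
  pivot_longer(c(temp_anomaly, land_anomaly, ocean_anomaly),
               names_to = "region", values_to = "anomaly") %>%
  ggplot(aes(year, anomaly, color = region)) +
  geom_line() +
  geom_hline(yintercept = 0, color = "blue")
```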
Question 8
A major determinant of Earth's temperature is the greenhouse effect. Many gases absorb heat and re-radiate it toward the surface, preventing it from escaping the atmosphere. The greenhouse effect is vital in keeping Earth warm enough to sustain liquid water and life; however, changes in greenhouse gas levels can alter the temperature balance of the planet.
The greenhouse_gases data frame from dslabs contains concentrations of the three most significant greenhouse gases: carbon dioxide (CO2, abbreviated in the data as co2), methane (CH4, ch4 in the data), and nitrous oxide (N2O, n2o in the data). Measurements are provided every 20 years for the past 2,000 years.
str(greenhouse_gases)
## 'data.frame': 300 obs. of 3 variables:
## $ year : num 20 40 60 80 100 120 140 160 180 200 ...
## $ gas : chr "CO2" "CO2" "CO2" "CO2" ...
## $ concentration: num 278 278 277 277 278 ...
Complete the code outline below to make a line plot of concentration on the y-axis by year on the x-axis. Facet by gas, aligning the plots vertically so as to ease comparisons along the year axis. Add a vertical line with an x-intercept at the year 1850, noting the unofficial start of the industrial revolution and widespread fossil fuel consumption. Note that the units for ch4 and n2o are ppb while the units for co2 are ppm.
greenhouse_gases %>%
  ggplot(aes(year, concentration)) +
  geom_line() +
  facet_grid(gas ~ ., scales = "free") +
  geom_vline(xintercept = 1850, color = "red") +
  ylab("Concentration (ch4/n2o ppb, co2 ppm)") +
  ggtitle("Atmospheric greenhouse gas concentration by year, 0-2000")
Question 10
Make a time series line plot of carbon emissions (carbon_emissions) from the temp_carbon dataset. The y-axis is metric tons of carbon emitted per year.
temp_carbon %>%
  filter(!is.na(carbon_emissions)) %>%
  ggplot(aes(year, carbon_emissions)) +
  geom_line()
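Before interpreting the emissions curve, it can help to confirm which years actually have non-missing emissions data, since temp_carbon covers a wider range of years than the emissions record. A small check:

```r
library(dplyr)
library(dslabs)
data(temp_carbon)

# First and last years with a recorded carbon_emissions value
temp_carbon %>%
  filter(!is.na(carbon_emissions)) %>%
  summarize(first_year = min(year), last_year = max(year))
```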
Question 11
We saw how greenhouse gases have changed over the course of human history, but how has CO2 (co2 in the data) varied over a longer time scale? The historic_co2 data frame in dslabs contains direct measurements of atmospheric co2 from Mauna Loa since 1959 as well as indirect measurements of atmospheric co2 from ice cores dating back 800,000 years.
Make a line plot of co2 concentration over time (year), coloring by the measurement source (source). Save this plot as co2_time for later use.
co2_time <- historic_co2 %>%
  filter(!is.na(co2)) %>%
  ggplot(aes(year, co2, col = source)) +
  geom_line() +
  ggtitle("Atmospheric CO2 concentration, -800,000 BC to today") +
  ylab("co2 (ppmv)")
co2_time
Question 12
One way to differentiate natural co2 oscillations from today’s manmade co2 spike is by examining the rate of change of co2. The planet is affected not only by the absolute concentration of co2 but also by its rate of change. When the rate of change is slow, living and nonliving systems have time to adapt to new temperature and gas levels, but when the rate of change is fast, abrupt differences can overwhelm natural systems. How does the pace of natural co2 change differ from the current rate of change?
Use the co2_time plot saved above. Change the limits as directed to investigate the rate of change in co2 over various periods with spikes in co2 concentration.
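The specific limits come from the exercise prompt; as an illustration only (the year windows below are placeholders, not the graded answers), xlim() zooms by dropping points outside the range, while coord_cartesian() zooms the viewport without discarding data:

```r
library(dplyr)
library(ggplot2)
library(dslabs)
data(historic_co2)

# Rebuild the co2_time plot from Question 11
co2_time <- historic_co2 %>%
  filter(!is.na(co2)) %>%
  ggplot(aes(year, co2, color = source)) +
  geom_line()

# Scale limits: points outside the window are removed (with a warning)
co2_time + xlim(-800000, -775000)

# coord_cartesian(): same zoom, but all data are retained
co2_time + coord_cartesian(xlim = c(-3000, 2018))
```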